

Image processing articles from across Nature Portfolio

Image processing is the manipulation of an image that has been digitised and uploaded into a computer. Software programs modify the image to make it more useful and can, for example, enable image recognition.

Latest Research and Reviews


Computer vision for kinematic metrics of the drinking task in a pilot study of neurotypical participants

  • Justin Huber
  • Stacey Slone


A multicentre study to evaluate the diagnostic performance of a novel CAD software, DecXpert, for radiological diagnosis of tuberculosis in the northern Indian population

  • Ankit Shukla


Automatic ploidy prediction and quality assessment of human blastocysts using time-lapse imaging

Assessing human embryos is crucial for in vitro fertilization, a task being revolutionized by artificial intelligence. Here, the authors introduce BELA, an automated AI model for predicting embryo ploidy status and quality using time-lapse imaging.

  • Suraj Rajendran
  • Matthew Brendel
  • Iman Hajirasouliha


An encryption algorithm for color images based on an improved dual-chaotic system combined with DNA encoding

  • Tingting Liu


Automated Association for Osteosynthesis Foundation and Orthopedic Trauma Association classification of pelvic fractures on pelvic radiographs using deep learning

  • Seung Hwan Lee
  • Kwang Gi Kim


A pathology foundation model for cancer diagnosis and prognosis prediction

A study describes the development of a generalizable foundation machine learning framework to extract pathology imaging features for cancer diagnosis and prognosis prediction.

  • Junhan Zhao
  • Kun-Hsing Yu


News and Comment

Cell Painting Gallery: an open resource for image-based profiling

  • Erin Weisbart
  • Ankur Kumar
  • Shantanu Singh


The promise of machine learning approaches to capture cellular senescence heterogeneity

The identification of senescent cells is a long-standing unresolved challenge, owing to their intrinsic heterogeneity and the lack of universal markers. In this Comment, we discuss the recent advent of machine-learning-based approaches to identifying senescent cells by using unbiased, multiparameter morphological assessments, and how these tools can assist future senescence research.

  • Imanol Duran
  • Cleo L. Bishop
  • Ryan Wallis


Visual interpretability of bioimaging deep learning models

The success of deep learning in analyzing bioimages comes at the expense of biologically meaningful interpretations. We review the state of the art of explainable artificial intelligence (XAI) in bioimaging and discuss its potential in hypothesis generation and data-driven discovery.

  • Assaf Zaritsky

Next-generation AI for connectomics

New approaches in artificial intelligence (AI), such as foundation models and synthetic data, are having a substantial impact on many areas of applied computer science. Here we discuss the potential to apply these developments to the computational challenges associated with producing synapse-resolution maps of nervous systems, an area in which major ambitions are currently bottlenecked by AI performance.

  • Michał Januszewski


Multimodal large language models for bioimage analysis

Multimodal large language models have been recognized as a historical milestone in the field of artificial intelligence and have demonstrated revolutionary potential not only in commercial applications but also in many scientific fields. Here we give a brief overview of multimodal large language models through the lens of bioimage analysis and discuss how we could build these models as a community to facilitate biology research.

  • Shanghang Zhang
  • Jianxu Chen


Neurotransmitters at a glance

Machine learning approaches can distinguish six different classes of presynapses from electron micrographs across the Drosophila brain.

  • Rita Strack



SPECIALTY GRAND CHALLENGE article

Grand challenges in image processing

Frédéric Dufaux

  • Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des signaux et Systèmes, Gif-sur-Yvette, France

Introduction

The field of image processing has been the subject of intensive research and development activities for several decades. This broad area encompasses topics such as image/video processing, image/video analysis, image/video communications, image/video sensing, modeling and representation, computational imaging, electronic imaging, information forensics and security, 3D imaging, medical imaging, and machine learning applied to these topics. Hereafter, we consider both image and video content (i.e., sequences of images), and more generally all forms of visual information.

Rapid technological advances, especially in terms of computing power and network transmission bandwidth, have resulted in many remarkable and successful applications. Nowadays, images are ubiquitous in our daily life. Entertainment is one class of applications that has greatly benefited, including digital TV (e.g., broadcast, cable, and satellite TV), Internet video streaming, digital cinema, and video games. Beyond entertainment, imaging technologies are central in many other applications, including digital photography, video conferencing, video monitoring and surveillance, satellite imaging, but also in more distant domains such as healthcare and medicine, distance learning, digital archiving, cultural heritage or the automotive industry.

In this paper, we highlight a few research grand challenges for future imaging and video systems, in order to achieve breakthroughs to meet the growing expectations of end users. Given the vastness of the field, this list is by no means exhaustive.

A Brief Historical Perspective

We first briefly discuss a few key milestones in the field of image processing. Key inventions in the development of photography and motion pictures can be traced to the 19th century. The earliest surviving photograph of a real-world scene was made by Nicéphore Niépce in 1827 (Hirsch, 1999). The Lumière brothers made the first cinematographic film in 1895, with a public screening the same year (Lumiere, 1996). After decades of remarkable developments, the second half of the 20th century saw the emergence of new technologies launching the digital revolution. While the first prototype digital camera using a Charge-Coupled Device (CCD) was demonstrated in 1975, the first commercial consumer digital cameras appeared in the early 1990s. These digital cameras quickly surpassed film cameras, and the digital revolution in the field of imaging was underway. As a key consequence, digitization enabled computational imaging, that is, the use of sophisticated processing algorithms to produce high-quality images.

In 1992, the Joint Photographic Experts Group (JPEG) released the JPEG standard for still image coding ( Wallace, 1992 ). In parallel, in 1993, the Moving Picture Experts Group (MPEG) published its first standard for coding of moving pictures and associated audio, MPEG-1 ( Le Gall, 1991 ), and a few years later MPEG-2 ( Haskell et al., 1996 ). By guaranteeing interoperability, these standards have been essential in many successful applications and services, for both the consumer and business markets. In particular, it is remarkable that, almost 30 years later, JPEG remains the dominant format for still images and photographs.

In the late 2000s and early 2010s, a paradigm shift occurred with the appearance of smartphones integrating a camera. Thanks to advances in computational photography, these new smartphones soon became capable of rivaling the quality of consumer digital cameras of the time, and they could also acquire video sequences. Almost concurrently, another key evolution was the development of high-bandwidth networks. In particular, the launch of 4G wireless services circa 2010 enabled users to quickly and efficiently exchange multimedia content. Since then, most of us carry a camera anywhere and anytime, allowing us to capture images and videos at will and to seamlessly exchange them with our contacts.

As a direct consequence of the above developments, we are currently observing a boom in the usage of multimedia content. It is estimated that today 3.2 billion images are shared each day on social media platforms, and 300 hours of video are uploaded every minute on YouTube1. In a 2019 report, Cisco estimated that video content represented 75% of all Internet traffic in 2017, and this share was forecast to grow to 82% by 2022 (Cisco, 2019). While Internet video streaming and Over-The-Top (OTT) media services account for a significant bulk of this traffic, other applications are also expected to see significant increases, including video surveillance and Virtual Reality (VR)/Augmented Reality (AR).

Hyper-Realistic and Immersive Imaging

A major direction, and a key driver of research and development activities over the years, has been the objective of delivering ever-improving image quality and user experience.

For instance, in the realm of video, we have observed constantly increasing spatial and temporal resolutions, with the emergence nowadays of Ultra High Definition (UHD). Another aim has been to provide a sense of depth in the scene. For this purpose, various 3D video representations have been explored, including stereoscopic 3D and multi-view (Dufaux et al., 2013).

In this context, the ultimate goal is to faithfully represent the physical world and to deliver an immersive and perceptually hyper-realistic experience. For this purpose, we discuss hereafter some emerging innovations. These developments are also very relevant in VR and AR applications (Slater, 2014). Finally, while this paper focuses only on visual information processing, emerging display technologies (Masia et al., 2013) and audio obviously also play key roles in many application scenarios.

Light Fields, Point Clouds, Volumetric Imaging

In order to wholly represent a scene, the light information coming from all the directions has to be represented. For this purpose, the 7D plenoptic function is a key concept ( Adelson and Bergen, 1991 ), although it is unmanageable in practice.

By introducing additional constraints, the light field representation collects radiance from rays in all directions. It therefore contains much richer information than traditional 2D imaging, which captures a 2D projection of the light in the scene by integrating over the angular domain. For instance, this allows post-capture processing such as refocusing and changing the viewpoint. However, it also entails several technical challenges, in terms of acquisition and calibration, as well as computational image processing steps including depth estimation, super-resolution, compression and image synthesis (Ihrke et al., 2016; Wu et al., 2017). The trade-off between spatial and angular resolutions is a fundamental issue. While a significant fraction of earlier work has focused on static light fields, dynamic light field video is expected to attract growing interest, in particular as dense multi-camera arrays become more tractable. Finally, the development of efficient light field compression and streaming techniques is a key enabler in many applications (Conti et al., 2020).
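To make the post-capture refocusing mentioned above concrete, here is a minimal numpy sketch (an illustration, not taken from any cited system): each sub-aperture view of a 4D light field is shifted in proportion to its angular offset from the central view and the shifted views are averaged, bringing one depth plane into focus. The array layout and the function name `refocus` are assumptions made for this example.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lightfield: array of shape (U, V, H, W) holding the sub-aperture images.
    alpha: focus parameter; each view is shifted proportionally to its
           angular offset from the central view, then all views are averaged.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))   # vertical shift for this view
            dx = int(round(alpha * (v - V // 2)))   # horizontal shift for this view
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Objects at the depth matching `alpha` add up coherently and stay sharp, while other depths are averaged over misaligned copies and blur, which is exactly the refocusing effect described above.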

Another promising direction is to consider a point cloud representation. A point cloud is a set of points in 3D space represented by their spatial coordinates and additional attributes, including color values, normals, or reflectance. Point clouds are often very large, easily comprising millions of points, and are typically sparse. One major distinguishing feature of point clouds is that, unlike images, they do not have a regular structure, calling for new algorithms. To remove the noise often present in acquired data while preserving intrinsic characteristics, effective 3D point cloud filtering approaches are needed (Han et al., 2017). It is also important to develop efficient techniques for Point Cloud Compression (PCC). For this purpose, MPEG is developing two standards: Geometry-based PCC (G-PCC) and Video-based PCC (V-PCC) (Graziosi et al., 2020). G-PCC considers the point cloud in its native form and compresses it using 3D data structures such as octrees. Conversely, V-PCC projects the point cloud onto 2D planes and then applies existing video coding schemes. More recently, deep learning-based approaches for PCC have been shown to be effective (Guarda et al., 2020). Another challenge is to develop generic and robust solutions able to handle the potentially widely varying characteristics of point clouds, e.g. in terms of size and non-uniform density. Efficient solutions for dynamic point clouds are also needed. Finally, while many techniques process the geometric information and the attributes independently, it is paramount to process them jointly.
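To illustrate the octree structure that G-PCC builds on (a heavily simplified sketch, not the actual standard, which adds entropy coding and many refinements), the following pure-Python function recursively splits a bounding cube and emits one 8-bit occupancy byte per internal node:

```python
def octree_encode(points, origin=(0.0, 0.0, 0.0), size=1.0, depth=4):
    """Encode point-cloud geometry as a breadth-first list of octree occupancy bytes.

    Each byte records which of a node's 8 children contain at least one point;
    occupied children are subdivided further until `depth` levels are reached.
    """
    codes = []
    queue = [(points, origin, size, 0)]
    while queue:
        pts, (ox, oy, oz), s, d = queue.pop(0)
        if d == depth or not pts:
            continue
        half = s / 2.0
        children = [[] for _ in range(8)]
        for p in pts:                       # route each point to one octant
            ix = 1 if p[0] >= ox + half else 0
            iy = 1 if p[1] >= oy + half else 0
            iz = 1 if p[2] >= oz + half else 0
            children[(ix << 2) | (iy << 1) | iz].append(p)
        byte = 0
        for i, child in enumerate(children):
            if child:
                byte |= 1 << i              # mark octant i as occupied
                co = (ox + half * ((i >> 2) & 1),
                      oy + half * ((i >> 1) & 1),
                      oz + half * (i & 1))
                queue.append((child, co, half, d + 1))
        codes.append(byte)
    return codes
```

The byte stream grows with the number of occupied nodes rather than with the cube's volume, which is why octrees suit the sparse, irregular structure of point clouds noted above.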

High Dynamic Range and Wide Color Gamut

The human visual system is able to perceive, using various adaptation mechanisms, a broad range of luminous intensities, from very bright to very dark, as experienced every day in the real world. Nonetheless, current imaging technologies are still limited in terms of capturing or rendering such a wide range of conditions. High Dynamic Range (HDR) imaging aims at addressing this issue. Wide Color Gamut (WCG) is also often associated with HDR in order to provide a wider colorimetry.

HDR has reached some levels of maturity in the context of photography. However, extending HDR to video sequences raises scientific challenges in order to provide high quality and cost-effective solutions, impacting the whole imaging processing pipeline, including content acquisition, tone reproduction, color management, coding, and display ( Dufaux et al., 2016 ; Chalmers and Debattista, 2017 ). Backward compatibility with legacy content and traditional systems is another issue. Despite recent progress, the potential of HDR has not been fully exploited yet.
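One building block of the HDR pipeline mentioned above is tone reproduction, which maps a wide luminance range onto what a conventional display can render. As a hedged illustration, the sketch below implements a simple Reinhard-style global operator (scaling by the log-average luminance, then compressing into [0, 1)); production tone-mapping operators are considerably more sophisticated:

```python
import numpy as np

def tone_map(hdr, key=0.18, eps=1e-6):
    """Global Reinhard-style tone mapping of linear HDR luminance values.

    hdr: array of positive luminance values with an arbitrary dynamic range.
    key: target mid-grey level after exposure normalization.
    Returns values compressed into [0, 1), preserving brightness ordering.
    """
    log_avg = np.exp(np.mean(np.log(hdr + eps)))  # log-average (geometric mean) luminance
    scaled = key * hdr / log_avg                  # exposure adjustment
    return scaled / (1.0 + scaled)                # smooth compression to [0, 1)
```

Dark values pass through almost linearly while very bright values saturate gracefully, which is the basic behavior any global tone-mapping operator must provide.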

Coding and Transmission

Three decades of standardization activities have continuously improved the hybrid video coding scheme based on the principles of transform coding and predictive coding. The Versatile Video Coding (VVC) standard was finalized in 2020 (Bross et al., 2021), achieving approximately 50% bit rate reduction for the same subjective quality when compared to its predecessor, High Efficiency Video Coding (HEVC). While substantially outperforming VVC in the short term may be difficult, one encouraging direction is to rely on improved perceptual models to further optimize compression in terms of visual quality. Another direction, which has already shown promising results, is to apply deep learning-based approaches (Ding et al., 2021). Here, one key issue is the ability to generalize these deep models to a wide diversity of video content. A second key issue is implementation complexity, both in terms of computation and memory requirements, which is a significant obstacle to widespread deployment. Besides, the emergence of new video formats targeting immersive communications also calls for new coding schemes (Wien et al., 2019).
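The transform-coding principle at the heart of these hybrid codecs can be sketched in a few lines. The example below is an illustrative simplification, not any standard's actual pipeline: it builds an orthonormal DCT-II basis matrix, transforms an image block, applies uniform quantization as the single lossy step, and reconstructs the block:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix, the transform family used by JPEG/HEVC-style codecs."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)              # constant (DC) basis row
    return m

def transform_code(block, q_step):
    """Round trip for one square image block:
    forward 2D DCT -> uniform quantization -> dequantization -> inverse 2D DCT."""
    n = block.shape[0]
    D = dct_matrix(n)
    coeffs = D @ block @ D.T                # forward separable 2D transform
    quantized = np.round(coeffs / q_step)   # the only lossy step
    return D.T @ (quantized * q_step) @ D   # reconstruction
```

A larger `q_step` zeroes out more high-frequency coefficients, trading reconstruction error for fewer symbols to entropy-code, which is precisely the rate-distortion trade-off that standardization has been refining for three decades.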

Considering that in many application scenarios, videos are processed by intelligent analytic algorithms rather than viewed by users, another interesting track is the development of video coding for machines ( Duan et al., 2020 ). In this context, the compression is optimized taking into account the performance of video analysis tasks.

The push toward hyper-realistic and immersive visual communications most often entails an increased raw data rate. Despite improved compression schemes, more transmission bandwidth is needed. Moreover, some emerging applications, such as VR/AR, autonomous driving, and Industry 4.0, bring a strong requirement for low-latency transmission, with implications for both the image processing pipeline and the transmission channel. In this context, the emergence of 5G wireless networks will positively contribute to the deployment of new multimedia applications, and the development of future wireless communication technologies points toward promising advances (Da Costa and Yang, 2020).

Human Perception and Visual Quality Assessment

It is important to develop effective models of human perception. On the one hand, it can contribute to the development of perceptually inspired algorithms. On the other hand, perceptual quality assessment methods are needed in order to optimize and validate new imaging solutions.

The notion of Quality of Experience (QoE) relates to the degree of delight or annoyance of the user of an application or service (Le Callet et al., 2012). QoE is strongly linked to subjective and objective quality assessment methods. Many years of research have resulted in the successful development of perceptual visual quality metrics based on models of human perception (Lin and Kuo, 2011; Bovik, 2013). More recently, deep learning-based approaches have also been successfully applied to this problem (Bosse et al., 2017). While these perceptual quality metrics have achieved good performance, several significant challenges remain. First, when applied to video sequences, most current perceptual metrics operate on individual frames, neglecting temporal modeling. Second, whereas color is a key attribute, there are currently no widely accepted perceptual quality metrics that explicitly consider color. Finally, new modalities, such as 360° videos, light fields, point clouds, and HDR, require new approaches.
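As a baseline against which the perceptual metrics above are usually compared, simple signal-fidelity measures such as PSNR remain ubiquitous even though they ignore the human visual system entirely; a minimal implementation:

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images.

    A pure fidelity measure: it averages squared pixel differences
    with no model of human perception, which is exactly why
    perceptual metrics were developed as a replacement.
    """
    mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Two distortions with identical PSNR can look very different to a viewer (e.g. blur vs. localized artifacts), which motivates the perceptually grounded metrics discussed in this section.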

Another closely related topic is image esthetic assessment (Deng et al., 2017). The esthetic quality of an image is affected by numerous factors, such as lighting, color, contrast, and composition. It is useful in different application scenarios such as image retrieval and ranking, recommendation, and photo enhancement. While earlier attempts used handcrafted features, most recent techniques to predict esthetic quality are data-driven and based on deep learning approaches, leveraging the availability of large annotated datasets for training (Murray et al., 2012). One key challenge is the inherently subjective nature of esthetics assessment, resulting in ambiguity in the ground-truth labels. Another important issue is to explain the behavior of deep esthetic prediction models.

Analysis, Interpretation and Understanding

Another major research direction has been the objective to efficiently analyze, interpret and understand visual data. This goal is challenging, due to the high diversity and complexity of visual data. This has led to many research activities, involving both low-level and high-level analysis, addressing topics such as image classification and segmentation, optical flow, image indexing and retrieval, object detection and tracking, and scene interpretation and understanding. Hereafter, we discuss some trends and challenges.

Keypoints Detection and Local Descriptors

Local image matching has been a cornerstone of many analysis tasks. It involves the detection of keypoints, i.e. salient visual points that can be robustly and repeatedly detected, and the computation of descriptors, i.e. compact signatures locally describing the visual features at each keypoint. Pairwise matching between descriptors then reveals local correspondences. In this context, several frameworks have been proposed, including Scale Invariant Feature Transform (SIFT) (Lowe, 2004) and Speeded Up Robust Features (SURF) (Bay et al., 2008), and later binary variants including Binary Robust Independent Elementary Features (BRIEF) (Calonder et al., 2010), Oriented FAST and Rotated BRIEF (ORB) (Rublee et al., 2011) and Binary Robust Invariant Scalable Keypoints (BRISK) (Leutenegger et al., 2011). Although these approaches exhibit scale and rotation invariance, they are less suited to large 3D distortions such as perspective deformations, out-of-plane rotations, and significant viewpoint changes. Besides, they tend to fail under significantly varying and challenging illumination conditions.
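The idea behind binary descriptors such as BRIEF can be sketched very compactly (a toy illustration, not the published algorithm, which carefully designs the sampling pattern and smoothing): intensities are compared at fixed random pixel pairs inside a patch, each comparison yields one bit, and descriptors are matched by Hamming distance:

```python
import numpy as np

rng = np.random.default_rng(0)

# One fixed, random sampling pattern shared by all descriptors:
# each row is a (y1, x1, y2, x2) pixel pair inside an 8x8 patch.
PATCH = 8
PAIRS = rng.integers(0, PATCH, size=(128, 4))

def describe(patch):
    """128-bit binary descriptor of an 8x8 intensity patch, packed into 16 bytes."""
    bits = patch[PAIRS[:, 0], PAIRS[:, 1]] < patch[PAIRS[:, 2], PAIRS[:, 3]]
    return np.packbits(bits)

def hamming(d1, d2):
    """Number of differing bits between two packed descriptors (the matching cost)."""
    return int(np.unpackbits(d1 ^ d2).sum())
```

Because matching reduces to XOR and a popcount, binary descriptors are orders of magnitude cheaper to compare than floating-point ones such as SIFT, which is why they became popular for real-time applications like SLAM and visual odometry.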

These traditional approaches based on handcrafted features have been successfully applied to problems such as image and video retrieval, object detection, visual Simultaneous Localization And Mapping (SLAM), and visual odometry. Besides, the emergence of new imaging modalities as introduced above can also be beneficial for image analysis tasks, including light fields ( Galdi et al., 2019 ), point clouds ( Guo et al., 2020 ), and HDR ( Rana et al., 2018 ). However, when applied to high-dimensional visual data for semantic analysis and understanding, these approaches based on handcrafted features have been supplanted in recent years by approaches based on deep learning.

Deep Learning-Based Methods

Data-driven deep learning-based approaches (LeCun et al., 2015), and in particular the Convolutional Neural Network (CNN) architecture, nowadays represent the state of the art in terms of performance for complex pattern recognition tasks in scene analysis and understanding. By combining multiple processing layers, deep models are able to learn data representations with different levels of abstraction.

Supervised learning is the most common form of deep learning. It requires a large and fully labeled training dataset, whose construction is typically time-consuming and expensive and must be repeated whenever a new application scenario is tackled. Moreover, in some specialized domains, e.g. medical data, it can be very difficult to obtain annotations. To alleviate this major burden, methods such as transfer learning and weakly supervised learning have been proposed.

In another direction, deep models have been shown to be vulnerable to adversarial attacks (Akhtar and Mian, 2018). These attacks introduce subtle perturbations to the input such that the model predicts an incorrect output. For instance, in the case of images, imperceptible pixel differences are able to fool deep learning models. Such adversarial attacks are a major obstacle to the successful deployment of deep learning, especially in applications where safety and security are critical. While some early solutions have been proposed, a significant challenge is to develop effective defense mechanisms against these attacks.
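To show how small such a perturbation can be, here is a sketch of the fast-gradient-sign idea applied to a toy logistic classifier (the model and the function name are assumptions made for this illustration, not the attacks surveyed above): the input is nudged by `eps` per coordinate in the direction that increases the loss most:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast-gradient-sign-style perturbation of input x against a
    logistic classifier p = sigmoid(w.x + b) with true label y in {0, 1}.

    The gradient of the cross-entropy loss w.r.t. x is (p - y) * w;
    moving eps in the sign of that gradient maximally increases the
    loss under an L-infinity budget of eps.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad = (p - y) * w                      # loss gradient w.r.t. the input
    return x + eps * np.sign(grad)          # bounded, worst-case nudge
```

Even for this linear toy model, the perturbed input moves the decision score in the wrong direction while changing no coordinate by more than `eps`, mirroring how imperceptible pixel changes can flip a deep model's prediction.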

Finally, another challenge is to enable low complexity and efficient implementations. This is especially important for mobile or embedded applications. For this purpose, further interactions between signal processing and machine learning can potentially bring additional benefits. For instance, one direction is to compress deep neural networks in order to enable their more efficient handling. Moreover, by combining traditional processing techniques with deep learning models, it is possible to develop low complexity solutions while preserving high performance.

Explainability in Deep Learning

While data-driven deep learning models often achieve impressive performance on many visual analysis tasks, their black-box nature makes it inherently difficult to understand how they reach a predicted output and how that output relates to particular characteristics of the input data. This opacity is a major impediment in many decision-critical application scenarios. Moreover, it is important not only to have confidence in the proposed solution, but also to gain further insights from it. Based on these considerations, some deep learning systems aim at promoting explainability (Adadi and Berrada, 2018; Xie et al., 2020), by exhibiting traits related to confidence, trust, safety, and ethics.

However, explainable deep learning is still in its early phase. More developments are needed, in particular to develop a systematic theory of model explanation. Important aspects include the need to understand and quantify risk, to comprehend how the model makes predictions for transparency and trustworthiness, and to quantify the uncertainty in the model prediction. This challenge is key in order to deploy and use deep learning-based solutions in an accountable way, for instance in application domains such as healthcare or autonomous driving.

Self-Supervised Learning

Self-supervised learning refers to methods that learn general visual features from large-scale unlabeled data, without the need for manual annotations. Self-supervised learning is therefore very appealing, as it allows exploiting the vast amount of unlabeled images and videos available. Moreover, it is widely believed to be closer to how humans actually learn. One common approach is to let the data itself provide the supervision, leveraging its structure. More generally, a pretext task can be defined, e.g. image inpainting, colorizing grayscale images, or predicting future frames in videos, by withholding some parts of the data and training the neural network to predict them (Jing and Tian, 2020). By optimizing an objective corresponding to the pretext task, the network is forced to learn relevant visual features in order to solve the problem. Self-supervised learning has also been successfully applied to perception for autonomous vehicles. More specifically, the complementarity between analytical and learning methods can be exploited to address various autonomous driving perception tasks without the prerequisite of an annotated dataset (Chiaroni et al., 2021).
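A classic pretext task of this kind is rotation prediction (sketched below under the assumption of square images; the function name is illustrative): each image is rotated by a random multiple of 90°, and the applied rotation class serves as a label that comes for free from the data itself:

```python
import numpy as np

def rotation_pretext_batch(images, seed=0):
    """Generate (input, label) pairs for a rotation-prediction pretext task.

    Each square image is rotated by 0, 90, 180, or 270 degrees; a network
    trained to predict which rotation was applied must learn meaningful
    visual features, with no manual annotation required.
    """
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))     # rotation class in {0, 1, 2, 3}
        xs.append(np.rot90(img, k))     # the self-generated input
        ys.append(k)                    # the free supervision signal
    return np.stack(xs), np.array(ys)
```

The labels cost nothing to produce, which is the defining property of self-supervision: the structure withheld from the data (here, the image's upright orientation) becomes the training target.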

While good performance has already been obtained using self-supervised learning, further work is still needed. A few promising directions are outlined hereafter. Combining self-supervised learning with other learning methods is a first interesting path. For instance, semi-supervised learning (Van Engelen and Hoos, 2020) and few-shot learning (Fei-Fei et al., 2006) methods have been proposed for scenarios where limited labeled data is available. The performance of these methods can potentially be boosted by incorporating self-supervised pre-training, and the pretext task can also serve as a regularizer. Another interesting trend in self-supervised learning is to train neural networks with synthetic data; the challenge here is to bridge the domain gap between synthetic and real data. Finally, another compelling direction is to exploit data from different modalities. A simple example is to consider both the video and audio signals in a video sequence. In another example, in the context of autonomous driving, vehicles are typically equipped with multiple sensors, including cameras, LIght Detection And Ranging (LIDAR), Global Positioning System (GPS), and Inertial Measurement Units (IMU). In such cases, it is easy to acquire large unlabeled multimodal datasets whose different modalities can be effectively exploited by self-supervised learning methods.

Reproducible Research and Large Public Datasets

The reproducible research initiative is another way to further ensure high-quality research for the benefit of our community ( Vandewalle et al., 2009 ). Reproducibility, referring to the ability by someone else working independently to accurately reproduce the results of an experiment, is a key principle of the scientific method. In the context of image and video processing, it is usually not sufficient to provide a detailed description of the proposed algorithm. Most often, it is essential to also provide access to the code and data. This is even more imperative in the case of deep learning-based models.

In parallel, the availability of large public datasets is also highly desirable in order to support research activities. This is especially critical for new emerging modalities or specific application scenarios, where it is difficult to get access to relevant data. Moreover, with the emergence of deep learning, large datasets, along with labels, are often needed for training, which can be another burden.

Conclusion and Perspectives

The field of image processing is very broad and rich, with many successful applications in both the consumer and business markets. However, many technical challenges remain in order to further push the limits of imaging technologies. Two main trends are, on the one hand, to continually improve the quality and realism of image and video content, and on the other hand, to effectively interpret and understand this vast and complex amount of visual data. This list is certainly not exhaustive, and there are many other interesting problems, e.g. related to computational imaging, information security and forensics, or medical imaging. Key innovations will be found at the crossroads of image processing, optics, psychophysics, communication, computer vision, artificial intelligence, and computer graphics. Multi-disciplinary collaborations, involving actors from both academia and industry, are therefore critical to drive these breakthroughs.

The “Image Processing” section of Frontiers in Signal Processing aims to give the research community a forum to exchange, discuss and improve new ideas, with the goal of contributing to the further advancement of the field of image processing and bringing exciting innovations in the foreseeable future.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

1 https://www.brandwatch.com/blog/amazing-social-media-statistics-and-facts/ (accessed on Feb. 23, 2021).

Adadi, A., and Berrada, M. (2018). Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160. doi:10.1109/access.2018.2870052


Adelson, E. H., and Bergen, J. R. (1991). “The plenoptic function and the elements of early vision,” in Computational Models of Visual Processing. Cambridge, MA: MIT Press, 3–20.


Akhtar, N., and Mian, A. (2018). Threat of adversarial attacks on deep learning in computer vision: a survey. IEEE Access 6, 14410–14430. doi:10.1109/access.2018.2807385

Bay, H., Ess, A., Tuytelaars, T., and Van Gool, L. (2008). Speeded-up robust features (SURF). Comput. Vis. Image Underst. 110 (3), 346–359. doi:10.1016/j.cviu.2007.09.014

Bosse, S., Maniry, D., Müller, K. R., Wiegand, T., and Samek, W. (2017). Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 27 (1), 206–219. doi:10.1109/TIP.2017.2760518


Bovik, A. C. (2013). Automatic prediction of perceptual image and video quality. Proc. IEEE 101 (9), 2008–2024. doi:10.1109/JPROC.2013.2257632

Bross, B., Chen, J., Ohm, J. R., Sullivan, G. J., and Wang, Y. K. (2021). Developments in international video coding standardization after AVC, with an overview of Versatile Video Coding (VVC). Proc. IEEE . doi:10.1109/JPROC.2020.3043399

Calonder, M., Lepetit, V., Strecha, C., and Fua, P. (2010). Brief: binary robust independent elementary features. In K. Daniilidis, P. Maragos, and N. Paragios (eds) European conference on computer vision . Berlin, Heidelberg: Springer , 778–792. doi:10.1007/978-3-642-15561-1_56

Chalmers, A., and Debattista, K. (2017). HDR video past, present and future: a perspective. Signal. Processing: Image Commun. 54, 49–55. doi:10.1016/j.image.2017.02.003

Chiaroni, F., Rahal, M.-C., Hueber, N., and Dufaux, F. (2021). Self-supervised learning for autonomous vehicles perception: a conciliation between analytical and learning methods. IEEE Signal. Process. Mag. 38 (1), 31–41. doi:10.1109/msp.2020.2977269

Cisco, (20192019). Cisco visual networking index: forecast and trends, 2017-2022 (white paper) , Indianapolis, Indiana: Cisco Press .

Conti, C., Soares, L. D., and Nunes, P. (2020). Dense light field coding: a survey. IEEE Access 8, 49244–49284. doi:10.1109/ACCESS.2020.2977767

Da Costa, D. B., and Yang, H.-C. (2020). Grand challenges in wireless communications. Front. Commun. Networks 1 (1), 1–5. doi:10.3389/frcmn.2020.00001

Deng, Y., Loy, C. C., and Tang, X. (2017). Image aesthetic assessment: an experimental survey. IEEE Signal. Process. Mag. 34 (4), 80–106. doi:10.1109/msp.2017.2696576

Ding, D., Ma, Z., Chen, D., Chen, Q., Liu, Z., and Zhu, F. (2021). Advances in video compression system using deep neural network: a review and case studies . Ithaca, NY: Cornell university .

Duan, L., Liu, J., Yang, W., Huang, T., and Gao, W. (2020). Video coding for machines: a paradigm of collaborative compression and intelligent analytics. IEEE Trans. Image Process. 29, 8680–8695. doi:10.1109/tip.2020.3016485

Dufaux, F., Le Callet, P., Mantiuk, R., and Mrak, M. (2016). High dynamic range video - from acquisition, to display and applications . Cambridge, Massachusetts: Academic Press .

Dufaux, F., Pesquet-Popescu, B., and Cagnazzo, M. (2013). Emerging technologies for 3D video: creation, coding, transmission and rendering . Hoboken, NJ: Wiley .

Fei-Fei, L., Fergus, R., and Perona, P. (2006). One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach Intell. 28 (4), 594–611. doi:10.1109/TPAMI.2006.79

Galdi, C., Chiesa, V., Busch, C., Lobato Correia, P., Dugelay, J.-L., and Guillemot, C. (2019). Light fields for face analysis. Sensors 19 (12), 2687. doi:10.3390/s19122687

Graziosi, D., Nakagami, O., Kuma, S., Zaghetto, A., Suzuki, T., and Tabatabai, A. (2020). An overview of ongoing point cloud compression standardization activities: video-based (V-PCC) and geometry-based (G-PCC). APSIPA Trans. Signal Inf. Process. 9, 2020. doi:10.1017/ATSIP.2020.12

Guarda, A., Rodrigues, N., and Pereira, F. (2020). Adaptive deep learning-based point cloud geometry coding. IEEE J. Selected Top. Signal Process. 15, 415-430. doi:10.1109/mmsp48831.2020.9287060

Guo, Y., Wang, H., Hu, Q., Liu, H., Liu, L., and Bennamoun, M. (2020). Deep learning for 3D point clouds: a survey. IEEE transactions on pattern analysis and machine intelligence . doi:10.1109/TPAMI.2020.3005434

Han, X.-F., Jin, J. S., Wang, M.-J., Jiang, W., Gao, L., and Xiao, L. (2017). A review of algorithms for filtering the 3D point cloud. Signal. Processing: Image Commun. 57, 103–112. doi:10.1016/j.image.2017.05.009

Haskell, B. G., Puri, A., and Netravali, A. N. (1996). Digital video: an introduction to MPEG-2 . Berlin, Germany: Springer Science and Business Media .

Hirsch, R. (1999). Seizing the light: a history of photography . New York, NY: McGraw-Hill .

Ihrke, I., Restrepo, J., and Mignard-Debise, L. (2016). Principles of light field imaging: briefly revisiting 25 years of research. IEEE Signal. Process. Mag. 33 (5), 59–69. doi:10.1109/MSP.2016.2582220

Jing, L., and Tian, Y. (2020). “Self-supervised visual feature learning with deep neural networks: a survey,” IEEE transactions on pattern analysis and machine intelligence , Ithaca, NY: Cornell University .

Le Callet, P., Möller, S., and Perkis, A. (2012). Qualinet white paper on definitions of quality of experience. European network on quality of experience in multimedia systems and services (COST Action IC 1003), 3(2012) .

Le Gall, D. (1991). Mpeg: A Video Compression Standard for Multimedia Applications. Commun. ACM 34, 46–58. doi:10.1145/103085.103090

LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. nature 521 (7553), 436–444. doi:10.1038/nature14539

Leutenegger, S., Chli, M., and Siegwart, R. Y. (2011). “BRISK: binary robust invariant scalable keypoints,” IEEE International conference on computer vision , Barcelona, Spain , 6-13 Nov, 2011 ( IEEE ), 2548–2555.

Lin, W., and Jay Kuo, C.-C. (2011). Perceptual visual quality metrics: a survey. J. Vis. Commun. image representation 22 (4), 297–312. doi:10.1016/j.jvcir.2011.01.005

Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60 (2), 91–110. doi:10.1023/b:visi.0000029664.99615.94

Lumiere, L. (1996). 1936 the lumière cinematograph. J. Smpte 105 (10), 608–611. doi:10.5594/j17187

Masia, B., Wetzstein, G., Didyk, P., and Gutierrez, D. (2013). A survey on computational displays: pushing the boundaries of optics, computation, and perception. Comput. & Graphics 37 (8), 1012–1038. doi:10.1016/j.cag.2013.10.003

Murray, N., Marchesotti, L., and Perronnin, F. (2012). “AVA: a large-scale database for aesthetic visual analysis,” IEEE conference on computer vision and pattern recognition , Providence, RI , June, 2012 . ( IEEE ), 2408–2415. doi:10.1109/CVPR.2012.6247954

Rana, A., Valenzise, G., and Dufaux, F. (2018). Learning-based tone mapping operator for efficient image matching. IEEE Trans. Multimedia 21 (1), 256–268. doi:10.1109/TMM.2018.2839885

Rublee, E., Rabaud, V., Konolige, K., and Bradski, G. (2011). “ORB: an efficient alternative to SIFT or SURF,” IEEE International conference on computer vision , Barcelona, Spain , November, 2011 ( IEEE ), 2564–2571. doi:10.1109/ICCV.2011.6126544

Slater, M. (2014). Grand challenges in virtual environments. Front. Robotics AI 1, 3. doi:10.3389/frobt.2014.00003

Van Engelen, J. E., and Hoos, H. H. (2020). A survey on semi-supervised learning. Mach Learn. 109 (2), 373–440. doi:10.1007/s10994-019-05855-6

Vandewalle, P., Kovacevic, J., and Vetterli, M. (2009). Reproducible research in signal processing. IEEE Signal. Process. Mag. 26 (3), 37–47. doi:10.1109/msp.2009.932122

Wallace, G. K. (1992). The JPEG still picture compression standard. IEEE Trans. Consumer Electron.Feb 38 (1), xviii-xxxiv. doi:10.1109/30.125072

Wien, M., Boyce, J. M., Stockhammer, T., and Peng, W.-H. (20192019). Standardization status of immersive video coding. IEEE J. Emerg. Sel. Top. Circuits Syst. 9 (1), 5–17. doi:10.1109/JETCAS.2019.2898948

Wu, G., Masia, B., Jarabo, A., Zhang, Y., Wang, L., Dai, Q., et al. (2017). Light field image processing: an overview. IEEE J. Sel. Top. Signal. Process. 11 (7), 926–954. doi:10.1109/JSTSP.2017.2747126

Xie, N., Ras, G., van Gerven, M., and Doran, D. (2020). Explainable deep learning: a field guide for the uninitiated , Ithaca, NY: Cornell University ..

Keywords: image processing, immersive, image analysis, image understanding, deep learning, video processing

Citation: Dufaux F (2021) Grand Challenges in Image Processing. Front. Sig. Proc. 1:675547. doi: 10.3389/frsip.2021.675547

Received: 03 March 2021; Accepted: 10 March 2021; Published: 12 April 2021.

Copyright © 2021 Dufaux. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Frédéric Dufaux, [email protected]



Viewpoints on Medical Image Processing: From Science to Application

Thomas M. Deserno (né Lehmann)

1 Department of Medical Informatics, Uniklinik RWTH Aachen, Germany;

Heinz Handels

2 Institute of Medical Informatics, University of Lübeck, Germany;

Klaus H. Maier-Hein (né Fritzsche)

3 Medical and Biological Informatics, German Cancer Research Center, Heidelberg, Germany;

Sven Mersmann

4 Medical and Biological Informatics, Junior Group Computer-assisted Interventions, German Cancer Research Center, Heidelberg, Germany;

Christoph Palm

5 Regensburg – Medical Image Computing (Re-MIC), Faculty of Computer Science and Mathematics, Regensburg University of Applied Sciences, Regensburg, Germany;

Thomas Tolxdorff

6 Institute of Medical Informatics, Charité - Universitätsmedizin Berlin, Germany;

Gudrun Wagenknecht

7 Electronic Systems (ZEA-2), Central Institute of Engineering, Electronics and Analytics, Forschungszentrum Jülich GmbH, Germany;

Thomas Wittenberg

8 Image Processing & Biomedical Engineering Department, Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany

Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to application, analyzing fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are fields of rapid development with clear trends toward integrated applications in diagnostics, treatment planning, and treatment.

1.  INTRODUCTION

Current advances in medical imaging are made in fields such as instrumentation, diagnostics, and therapeutic applications, and most of them are based on imaging technology and image processing. In fact, medical image processing has been established as a core field of innovation in modern health care [1], combining medical informatics, neuro-informatics, and bioinformatics [2].

In 1984, the Society of Photo-Optical Instrumentation Engineers (SPIE) launched a multi-track conference on medical imaging, which is still considered the core event for innovation in the field. Analogously, in Germany, the workshop “Bildverarbeitung für die Medizin (BVM)” (Image Processing for Medicine) has recently celebrated its 20th edition. The meeting has evolved over the years into a multi-track conference of international standard [3, 4, 5, 6, 7, 8, 9].

Nonetheless, it is hard to name the most important and innovative trends within this broad field, which ranges from image acquisition using novel imaging modalities to information extraction in diagnostics and treatment. Ritter et al. recently emphasized the following aspects: (i) enhancement, (ii) segmentation, (iii) registration, (iv) quantification, (v) visualization, and (vi) computer-aided detection (CAD) [10].

Another concept of structuring is here referred to as the “from-to” approach. For instance,

  • From nano to macro: In 2002, the Institute of Electrical and Electronics Engineers (IEEE) launched an international symposium on biomedical imaging (ISBI), co-founded by Michael Unser of EPFL, Switzerland. Under the motto "from nano to macro", this conference covers all aspects of medical imaging from the sub-cellular to the organ level.
  • From production to sharing: Another "from-to" migration is seen in the shift from acquisition to communication [11]. Clark et al. expected advances in the medical imaging field along four axes: (i) image production and new modalities; (ii) image processing, visualization, and system simulation; (iii) image management and retrieval; and (iv) image communication and telemedicine.
  • From kilobyte to terabyte: Deserno et al. identified yet another "from-to" migration in the amount of data produced by medical imagery [12]. Today, high-resolution CT reconstructs images with 8000 × 8000 pixels per slice at 0.7 μm isotropic detail detectability, and whole-body scans at this resolution reach several gigabytes (GB) of data. Also, microscopic whole-slide scanning systems can easily provide so-called virtual slices in the range of 30,000 × 50,000 pixels, which equals 16.8 GB at 10-bit gray scale.
  • From science to application: Finally, in this paper, we aim to analyze recent advances in medical imaging on another level. The focus is on identifying core fields that foster the transfer of algorithms into clinical use, and on addressing gaps that still remain to be bridged in future research.
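To make the "kilobyte to terabyte" trend concrete, a small back-of-the-envelope helper (hypothetical, for illustration only) computes the raw, uncompressed size of an image stack. Actual file sizes grow further with color channels, focal planes, and pyramid levels, and shrink again with compression.

```python
# Hypothetical helper illustrating the "kilobyte to terabyte" trend:
# raw storage needed for an uncompressed image or image stack.

def raw_size_bytes(width, height, bit_depth, slices=1, channels=1):
    """Raw (uncompressed) storage of an image stack in bytes."""
    bits = width * height * slices * channels * bit_depth
    return bits / 8

# A single 8000 x 8000 CT slice stored at 16 bit:
ct_slice = raw_size_bytes(8000, 8000, 16)
print(f"CT slice: {ct_slice / 2**20:.0f} MiB")  # 122 MiB

# A whole-slide scan of 30,000 x 50,000 pixels, single 10-bit gray plane:
wsi = raw_size_bytes(30_000, 50_000, 10)
print(f"Virtual slide (raw, single plane): {wsi / 2**30:.2f} GiB")
```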

The remainder of this review is organized as follows. In Section 2, we briefly analyze the history of the German workshop BVM; more than 15 years of proceedings are available, and statistics are applied to identify trends in the content of conference papers. Section 3 then provides personal viewpoints on challenging and pioneering fields. The results are discussed in Section 4.

2.  THE GERMAN HISTORY FROM SCIENCE TO APPLICATION

Since 1994, annual proceedings of the contributions presented at the BVM workshops have been published; since 1996, they are available electronically in PostScript (PS) or Portable Document Format (PDF). Regardless of the type of presentation (oral, poster, or software demonstration), authors may submit papers of up to five pages; in 2012 the limit was increased to six pages. Both English and German papers are allowed. The number of English contributions has increased steadily over the years, reaching about 50% in 2008 [8].

In order to analyze the content of the proceedings (on average 124k words long) with respect to the most relevant topics discussed at the BVM workshops, the incidence of the most frequent words was assessed for each proceedings volume from 1996 until 2012. About 300 common words of the German and English language (e.g. and / und, etc.) were excluded from this investigation. (Fig. 1) presents a word cloud computed from the 100 most frequent terms used in the proceedings of the 2012 BVM workshop. The font size of each word reflects its counted frequency in the text.
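The counting procedure described above can be sketched in a few lines. The stop-word list here is a tiny stand-in for the roughly 300 excluded German and English words, and the function name is illustrative.

```python
from collections import Counter
import re

# Illustrative sketch of the proceedings analysis: count word frequencies,
# drop common German/English stop words, and keep the n most frequent terms
# (the input to the word cloud). STOP_WORDS is a tiny stand-in for the
# ~300 words excluded in the actual study.
STOP_WORDS = {"and", "und", "the", "der", "die", "das", "of", "in"}

def top_terms(text, n=100):
    """Return the n most frequent non-stop-words as (word, count) pairs."""
    words = re.findall(r"[a-zäöüß]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(n)

sample = "Image segmentation and image registration of medical image data"
print(top_terms(sample, 3))  # [('image', 3), ...]
```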

Fig. (1). Word cloud representing the 100 most frequent terms counted from the 469-page BVM proceedings 2012 [13].

It can be seen that in 2012, "image" was the most frequent word in the BVM proceedings (920 incidences), as in all other years (1996-2012: 10,123 incidences). Together with terms like "reconstruction", "analysis", or "processing", medical imaging is clearly recognizable as the major subject of the BVM workshops.

Concerning the scientific direction of the BVM meeting over time, terms such as "segmentation", "registration", and "navigation", which indicate image processing procedures relevant for clinical applications, have been used with increasing frequency (Fig. 2, left). The same holds for terms like "evaluation" or "experiment", which relate to the validation of the contributions (Fig. 2, middle), constituting a first step towards the transition of scientific results into clinical application. (Fig. 2, right) shows the occurrence of the words "patient" and "application" in the contributed papers of the BVM workshops between 1996 and 2012. Here, rather constant numbers of occurrences are found, indicating a consistent focus on clinical applications.

Fig. (2). Trends from BVM workshop proceedings for important terms of processing procedures (left), experimental verification (middle), and application to humans (right).

3.  VIEWPOINTS FROM SCIENCE TO APPLICATION

3.1. Multi-Modal Image Processing for Imaging and Diagnosis

Multi-modal imaging refers to (i) different measurements at a single tomographic system (e.g., MRI and functional MRI), (ii) measurements at different tomographic systems (e.g., computed tomography (CT), positron emission tomography (PET), and single photon emission computed tomography (SPECT)), and (iii) measurements at integrated tomographic systems (PET/CT, PET/MR). Hence, multi-modal tomography has become increasingly popular in clinical and preclinical applications (Fig. 3), providing images of morphology and function (Fig. 4).

Fig. (3). PubMed-cited papers for the search "multimodal AND (imaging OR tomography OR image)".

Fig. (4). Morphological and functional imaging in clinical and pre-clinical applications.

Multi-modal image processing for enhancing multi-modal imaging procedures primarily deals with image reconstruction and artifact reduction. Examples are the integration of additional information about tissue types from MRI as an anatomical prior for the iterative reconstruction of PET images [14], and the CT- or MR-based correction of attenuation artifacts in PET, which is an essential prerequisite for quantitative PET analysis [15, 16]. Since these algorithms are part of the imaging workflow, only highly automated, fast, and robust algorithms providing adequate accuracy are appropriate solutions. Accordingly, the whole image in the different modalities must be considered.

This requirement differs for multi-modal diagnostic approaches. In most applications, a single organ or parts of an organ are of interest. Anatomical and particularly pathological regions often show high variability in structure, deformation, or movement, which is difficult to predict and thus poses a great challenge for image processing. In multi-modality applications, images represent complementary information, often obtained at different time-scales, introducing additional complexity for the algorithms. Further differences are introduced by the varying resolutions and fields of view, which show the organ of interest in different degrees of completeness. From a scientific and thus algorithmic point of view, image processing methods for multi-modal images must therefore meet higher requirements than those applied to single-modality images.

Looking exemplarily at segmentation, one of the most complex and demanding problems in medical image processing, the modality showing anatomical and pathological structures in high resolution and contrast (e.g., MRI, CT) is typically used to segment the structure or volume of interest (VOI), in order to subsequently analyze other properties, such as function, within these target structures. Here, the different resolutions have to be taken into account to correct for partial volume effects in the functional modality (e.g., PET, SPECT). Since the structures to be analyzed depend on the disease of the actual patient examined, automatic segmentation approaches are appropriate if the anatomical structures of interest are known beforehand [17], while semi-automatic approaches are advantageous if flexibility is needed [18, 19].
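As a minimal illustration of handling the resolution mismatch, the following sketch converts a high-resolution binary VOI mask into per-voxel tissue fractions on the coarser functional grid and uses them for a partial-volume-aware mean. Axis-aligned grids and an integer downsampling factor are assumed for simplicity; real pipelines resample through the full registration transform.

```python
import numpy as np

# Sketch: transfer a high-resolution anatomical VOI mask (e.g. from MRI or CT)
# to a coarser functional grid (e.g. PET) by block averaging. The resulting
# fractions can weight the functional signal to mitigate partial volume effects.
# Assumes axis-aligned grids and a factor-of-N resolution ratio.

def mask_to_fractions(mask, factor):
    """Average a binary mask over (factor x factor) blocks -> tissue fractions."""
    h, w = mask.shape
    return mask.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def pv_weighted_mean(signal, fractions, min_fraction=0.5):
    """Mean functional signal over voxels dominated by the structure."""
    sel = fractions >= min_fraction
    return (signal[sel] * fractions[sel]).sum() / fractions[sel].sum()

mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1                     # high-resolution VOI
frac = mask_to_fractions(mask, 2)      # 4x4 tissue-fraction map on the PET grid
```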

Transferring research into diagnostic application software requires a graphical user interface (GUI) to parameterize the algorithms, 2D and 3D visualization of multi-modal images and segmentation results, and tools to interact with the visualized images during the segmentation procedure. The Medical Interaction Toolkit [20] or MeVisLab [21] provide developers with frameworks for multi-modal visualization and interaction, and with tools to build appropriate GUIs, yielding an interface for integrating new algorithms from science into application.

Another important aspect of transferring algorithms from pure academics to clinical practice is evaluation. Phantoms can be used to evaluate specific properties of an algorithm, but not the real situation with all its uncertainties and variability. Thus, the most important step of migration is extensive testing of algorithms on large amounts of real clinical data, which is a great challenge particularly for multi-modal approaches and should in future be better supported by publicly available databases.

3.2. Analysis of Diffusion Weighted Images

Due to its sensitivity to micro-structural changes in white matter, diffusion weighted imaging (DWI) is of particular interest to brain research. Stroke is the most common and best-known clinical application of DWI: the images allow the non-invasive detection of ischemia within minutes of onset and are sensitive and relatively specific in detecting changes triggered by strokes [22]. The technique has also allowed deeper insights into the pathogenesis of Alzheimer's disease, Parkinson's disease, autism spectrum disorder, schizophrenia, and many other psychiatric and non-psychiatric brain diseases. DWI is also applied in the imaging of (mild) traumatic brain injury, where conventional techniques lack the sensitivity to detect the subtle changes occurring in the brain. Here, studies on sports-related traumata in the younger population have raised considerable debate in the recent past [23].

Methodologically, recent advances in the generation and analysis of large-scale networks on the basis of DWI are particularly exciting and promise new dimensions in quantitative neuro-imaging, by applying the profound set of tools available in graph theory to brain image analysis [24]. DWI sheds light on the living brain network architecture, revealing the organization of fiber connections together with their development and change in disease.
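A tiny example of this graph-theoretical perspective: given a DWI-derived connectivity matrix (synthetic here; in practice the entries come from tractography-based fiber counts between brain regions), basic network measures follow directly.

```python
import numpy as np

# Illustrative sketch: graph measures on a (synthetic) 4-node connectivity
# matrix, as used in network analyses of DWI tractography data.
conn = np.array([[0, 5, 2, 0],
                 [5, 0, 3, 1],
                 [2, 3, 0, 4],
                 [0, 1, 4, 0]], dtype=float)

adj = (conn > 0).astype(int)          # binarised adjacency matrix
n = adj.shape[0]
degree = adj.sum(axis=1)              # node degree (number of connections)
density = adj.sum() / (n * (n - 1))   # fraction of possible edges present
strength = conn.sum(axis=1)           # weighted degree ("node strength")
```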

Big challenges remain to be solved, though: despite many years of methodological development in DWI post-processing, the field still seems to be in its infancy. Reliable tractography-based reconstruction of known or pathological anatomy is still not solved. Reconstruction challenges at the 2011 and 2012 annual meetings of the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society demonstrated the lack of methods that can reliably reconstruct large and well-known structures like the cortico-spinal tract in datasets of clinical quality [25]. Missing reference-based evaluation techniques hinder a well-founded demonstration of the real advantages of novel tractography algorithms over previous methods [26]. These limitations have hindered a broader application of DWI tractography, e.g. in surgical guidance. Even though the application of DWI, e.g. in surgical resection, has been shown to facilitate the identification of risk structures [27], the widespread use of these techniques in surgical practice remains limited, mainly by the lack of robust and standardized methods that can be applied multi-centered across institutions, and by the lack of comprehensive evaluation of these algorithms.

There are, however, numerous applications of DWI in cancer imaging which bridge imaging science and clinical application. The modality has shown potential in the detection, staging, and characterization of tumors (Fig. 5), in the evaluation of therapy response, and even in the prediction of therapy outcome [28]. DWI has also been applied to the detection and characterization of lesions in the abdomen and pelvis, where the increased cellularity of malignant tissue leads to restricted diffusion compared to the surrounding tissue [29]. The challenge here, again, will be the establishment of reliable sequences and post-processing methods for widespread, multi-centric application of the techniques in the future.

Fig. (5). Depiction of fiber tracts in the vicinity of a grade IV glioblastoma. The volumetric tracking result (yellow) was overlaid on an axial T2-FLAIR image. Red and green arrows indicate the necrotic tumor core and peritumoral hyperintensity, respectively. In the frontal parts, fiber tracts are still depicted, whereas in the dorsal part, tracts seem to be either displaced or destroyed by the tumor.

3.3. Model-Based Image Analysis

As already emphasized in the previous viewpoints, there is a big gap between the state of the art in current research and the methods available in clinical applications, especially in the field of medical image analysis [30]. Segmentation of relevant image structures (tissues, tumors, vessels, etc.) is still one of the key problems in medical image computing, lacking robust and automatic methods. Pure data-driven approaches like thresholding, region growing, and edge detection, or enhanced data-driven methods like watershed algorithms, Markov random field (MRF)-based approaches, and graph cuts, often yield weak segmentations due to low contrast between neighboring image objects, image artifacts, noise, partial volume effects, etc.

Model-based segmentation integrates a-priori knowledge of the shape and appearance of relevant structures into the segmentation process. For example, the local shape of a vessel can be characterized by the vesselness operator [31], which generates images with an enhanced representation of vessels. Using the vesselness information in combination with the original gray-value image, segmentation of vessels can be improved significantly; in particular, the segmentation of small vessels becomes possible (e.g. [32]).
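A single-scale, 2-D sketch in the spirit of such Hessian-based vesselness measures is given below. The parameter values (sigma, beta, c) are illustrative only; production implementations sweep several scales and keep the maximum response.

```python
import numpy as np

# Single-scale 2-D vesselness sketch (Frangi-type) for bright tubular
# structures, using only NumPy. Parameters are illustrative.

def gaussian_smooth(img, sigma):
    """Separable Gaussian smoothing with a truncated kernel."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, out, k, mode="same")

def vesselness(img, sigma=2.0, beta=0.5, c=15.0):
    smooth = gaussian_smooth(img, sigma)
    gy, gx = np.gradient(smooth)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Eigenvalues of the 2x2 Hessian [[gxx, gxy], [gxy, gyy]], ordered |l1| <= |l2|
    mean = (gxx + gyy) / 2
    diff = np.sqrt(((gxx - gyy) / 2) ** 2 + gxy**2)
    e1, e2 = mean + diff, mean - diff
    swap = np.abs(e1) > np.abs(e2)
    l1, l2 = np.where(swap, e2, e1), np.where(swap, e1, e2)
    rb2 = (l1 / (l2 + 1e-12)) ** 2          # deviation from a line structure
    s2 = l1**2 + l2**2                      # overall structure strength
    v = np.exp(-rb2 / (2 * beta**2)) * (1 - np.exp(-s2 / (2 * c**2)))
    return np.where(l2 < 0, v, 0.0)         # keep bright-on-dark responses only

img = np.zeros((64, 64))
img[30:33, :] = 100.0                        # synthetic bright "vessel"
v = vesselness(img)                          # high along the bar, zero elsewhere
```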

In statistical or active shape and appearance models [33, 34], the shape variability of organs among individuals and characteristic gray value distributions in the neighborhood of the organ can be represented. In these approaches, a set of segmented image data is used to train active shape and active appearance models, which include information about the mean shape and its variations, as well as characteristic gray value distributions and their variation in the population represented by the training data. Instead of the direct point-to-point correspondences used in the generation of classical statistical shape models, Hufnagel et al. have suggested probabilistic point-to-point correspondences [35]. This approach takes into account that inaccuracies are often unavoidable in the definition of direct point correspondences between the organs of different persons. In probabilistic statistical shape models, these correspondence uncertainties are modeled explicitly to improve the robustness and accuracy of shape modeling and model-based segmentation. Integrated into an energy-minimizing level set framework, probabilistic statistical shape models can be used for enhanced organ segmentation [36].
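The core of a statistical shape model, PCA on aligned landmark vectors, can be sketched as follows. The landmark data are synthetic, and Procrustes alignment is assumed to have been done already.

```python
import numpy as np

# Sketch of a point distribution (statistical shape) model: landmark shapes
# are stacked as vectors, an eigen-decomposition of their covariance yields
# the mean shape and principal modes of variation, and new shape instances
# are generated as mean + weighted modes.
rng = np.random.default_rng(0)
base = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0])  # 4 landmarks (x, y)
shapes = np.stack([base + rng.normal(0, 0.05, 8) for _ in range(20)])

mean_shape = shapes.mean(axis=0)
X = shapes - mean_shape
cov = X.T @ X / (len(shapes) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
modes = eigvecs[:, ::-1]            # columns sorted by descending variance
variances = eigvals[::-1]

def synthesize(b):
    """Shape instance from mode weights b (uses the first len(b) modes)."""
    b = np.asarray(b)
    return mean_shape + modes[:, :len(b)] @ b

new_shape = synthesize([0.1, -0.05])  # plausible shape from two leading modes
```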

In contrast, atlas-based segmentation methods (e.g., [37]) realize a case-based approach and make use of the segmentation information contained in a single segmented data set, which is transferred to an unseen patient image data set. The transfer of the atlas segmentation to the patient is done by inter-individual non-linear registration. Multi-atlas segmentation methods using several atlases have been proposed (e.g., [38]) and show improved accuracy and robustness in comparison to single-atlas methods. Hence, multi-atlas approaches are currently a focus of further research [39, 40].
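The label-fusion step of multi-atlas segmentation can be illustrated with simple majority voting. Registration of each atlas to the patient is assumed to have happened already; the tiny label maps below are synthetic.

```python
import numpy as np

# Sketch of multi-atlas label fusion by majority voting: each atlas label map
# has been warped to the patient (registration is out of scope here); the
# fused segmentation takes the most frequent label per voxel.
def majority_vote(label_maps):
    stacked = np.stack(label_maps)                       # (n_atlases, H, W)
    n_labels = int(stacked.max()) + 1
    votes = np.stack([(stacked == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

# Three (synthetic) warped atlas label maps for a 2x2 patient image:
a1 = np.array([[0, 1], [1, 1]])
a2 = np.array([[0, 1], [0, 1]])
a3 = np.array([[1, 1], [1, 0]])
fused = majority_vote([a1, a2, a3])   # majority label per voxel
```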

In the future, more task-oriented systems integrated into diagnostic processes, intervention planning, therapy, and follow-up are needed. In the field of image analysis, due to the limited time of physicians, automatic procedures are of special interest to segment and extract quantitative object parameters in an accurate, reproducible, and robust way. Furthermore, intelligent and easy-to-use methods for fast correction of unavoidable segmentation errors are needed.

3.4. Registration of Section Images

Imaging techniques such as histology [41] or auto-radiography [42] are based on thin post-mortem sections. In comparison to in-vivo imaging, e.g. positron emission tomography (PET), magnetic resonance imaging (MRI), or DWI (as addressed in the previous viewpoint, cf. Section 3.2), several properties are considered advantageous. For instance, tissue can be processed after sectioning to enhance contrast (e.g. staining) [43], to mark specific properties like receptors [44], or to apply laser ablation for studying the spatial element distribution [45]; tissue can be scanned at high resolution [43]; and tissue is thin enough to allow optical light transmission imaging, e.g. polarized light imaging (PLI) [46]. Therefore, section imaging yields highly resolved, high-contrast data, which supports findings such as cytoarchitectonic boundaries [47], neuronal fiber directions [48], and receptor or element distributions [45].

Restacking 2D sections into a 3D volume, followed by fusion of this stack with an in-vivo volume, is the challenging task of medical image processing on the track from science to application. The 3D section stacks then serve as an atlas for a large variety of applications. Sections are non-linearly deformed during cutting and post-processing. Additionally, discontinuous artifacts like tears or enrolled tissue hamper the correspondence between the true structure and the imaged tissue.

The so-called “problem of the digitized banana” [41] prohibits section-by-section registration without a 3D reference: smoothness of registered stacks is not equivalent to consistency and correctness. Whereas the deformations are section-specific, the orientation of the sections relative to the 3D structure depends on the cutting direction and is thus the same for all sections. In this tangled situation, the question arises whether it is better (i) to restack the sections first, register the whole stack afterwards, and correct for deformations last (volume-first approach), or (ii) to register each section individually to the 3D reference volume while correcting deformations at the same time (section-first approach). Both approaches combine

  • Multi-modal registration: The need for a 3D reference and the goal of correlating high-resolution section imaging findings with in-vivo imaging are sometimes addressed at the same time: if possible, the 3D in-vivo modality itself is used as the reference.

Fig. (6). Characteristic flow chart of the volume-first approach and volume generation with (gray boxes) or without blockface images as intermediate reference modality (Column I). Either the in-vivo volume is post-processed to generate a pseudo-high-resolution volume with propagated section gaps (Column II), or the section volume is post-processed to get a low-resolution stack with filled gaps (Column III) [42].
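One computational building block of such section-to-reference alignment is translation estimation by phase correlation, sketched below for the exactly circular-shift case. Real pipelines must additionally handle rotation and the non-linear deformations discussed above; this sketch recovers integer pixel shifts only.

```python
import numpy as np

# Sketch: estimate the translation between a 2-D section and a reference
# slice by phase correlation. Exact only for circular shifts; real data
# needs windowing, rotation handling, and deformation correction on top.

def phase_correlation(ref, mov):
    """Return the (dy, dx) shift that aligns `mov` to `ref` (apply via np.roll)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + 1e-12              # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    h, w = ref.shape                             # wrap to signed shifts
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

ref = np.zeros((32, 32))
ref[10:14, 8:12] = 1.0                           # landmark in the reference slice
mov = np.roll(ref, (-3, 2), axis=(0, 1))         # "section" shifted by (-3, +2)
dy, dx = phase_correlation(ref, mov)             # rolling mov by (dy, dx) restores ref
```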

Due to the variety of difficulties, the lack of evaluation possibilities, and section specifics such as post-processing, embedding, cutting procedure, and tissue type, there is not just one best approach for getting from 2D to 3D. But careful work in this field pays off in cutting-edge applications. Not least within the European flagship, the Human Brain Project (HBP), further research in this area of medical image processing is in demand. The state-of-the-art review of the HBP states in the context of human brain mapping: “What is missing to date is an integrated open source tool providing a standard application programming interface (API) for data registration and coordinate transformations and guaranteeing multi-scale and multi-modal data accuracy” [49]. Such a tool will narrow the gap from science to application.

3.5. From Images to Information in Digital Endoscopy

Basic endoscopic technologies and their routine applications (Fig. 7, bottom layers) are still purely data-oriented, as the complete image analysis and interpretation is performed solely by the physician. If the content of endoscopic imagery is analyzed automatically, several new application scenarios of increasing complexity can be identified for diagnostics and intervention (Fig. 7, upper layers). As these new possibilities of endoscopy are inherently coupled with the use of computers, these endoscopic methods and applications can be referred to as computer-integrated endoscopy [50]. Information, however, refers to the highest of the five levels of semantics (Fig. 7):

Fig. (7). Modules to build computer-integrated endoscopy, which enables information gain from image data.

  • 1. Acquisition : Advancements in diagnostic endoscopy were obtained with glass fibers for the transmission of electric light into, and image information out of, the body. Besides purely wire-bound transmission of endoscopic imagery, wireless transmission has become available in the past 10 years for gastroscopic video data captured by capsule endoscopes [51].
  • 2. Transportation : Digital technologies have simplified the essential basic processes of capturing, storing, archiving, documenting, annotating, and transmitting endoscopic still images and image sequences. These developments initially led to possibilities for tele-diagnosis and tele-consultation in diagnostic endoscopy, where the image data is shared via local networks or the internet [52].
  • 3. Enhancement : Methods and applications for image enhancement include intelligent removal of honeycomb patterns in fiberscopic recordings [53], temporal filtering for the reduction of ablation smoke and moving particles [54], and image rectification for gastroscopes. Despite their increased complexity, these methods have to work in real time, with a maximum delay of 60 milliseconds, to be acceptable to surgeons and physicians.
  • 4. Augmentation : Image processing enhances endoscopic views with additional types of information. Examples are an artificial working horizon, the extension of key-hole views to endoscopic panorama images [55], and 3D surfaces computed from point clouds obtained by special endoscopic imaging devices such as stereo endoscopes [56], time-of-flight endoscopes [57], or shape-from-polarization approaches [58]. This level also includes the visualization and image fusion of endoscopic views with preoperatively acquired radiological imagery such as angiography or CT data [59] for better intra-operative orientation and navigation, as well as image-based tracking and navigation through tubular structures [60].
  • 5. Content : Methods of content-based image analysis address the automated segmentation, characterization, and classification of diagnostic image content. Such methods perform computer-assisted detection (CADe) [61] of lesions (such as polyps) or computer-assisted diagnostics (CADx) [62], where already detected and delineated regions are characterized and classified into, for instance, benign or malignant tissue areas. Furthermore, such methods can automatically identify and track surgical instruments, e.g. to support robotic surgery approaches.
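To make the Enhancement level concrete: one textbook way to attenuate the regular honeycomb pattern that a fiber bundle imprints on fiberscopic images is spectral low-pass filtering, since the pattern lives at high spatial frequencies while the anatomy dominates the low frequencies. This is a simplified sketch, not the specific method of [53]; the cutoff value is an assumption.

```python
import numpy as np

def suppress_honeycomb(image, cutoff=0.15):
    """Attenuate a fiber-bundle honeycomb pattern by keeping only spatial
    frequencies below `cutoff` (in cycles/pixel; Nyquist is 0.5)."""
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    keep = fy ** 2 + fx ** 2 <= cutoff ** 2   # radial low-pass mask
    return np.fft.ifft2(np.fft.fft2(image) * keep).real
```

A real-time implementation would additionally need to meet the latency budget mentioned above, which is why practical systems favor small separable kernels or GPU FFTs over this naive full-frame version.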

On the technical side, the semantics of the extracted image content increases from pure image recording up to the image content analysis level. This complexity also relates to the time expected to be needed to bring these methods from science to clinical application.

From the clinical side, the most complex methods, such as automated polyp detection (CADe), are considered the most important. It is expected that computer-integrated endoscopy systems will increasingly enter clinical application and thereby contribute to the quality of the patient’s healthcare.

3.6. Virtual Reality and Robotics

Virtual reality (VR) and robotics are two rapidly expanding fields with growing application in surgery. VR creates three-dimensional environments with an increased capability for sensory immersion, providing the sensation of being present in the virtual space. Applications of VR include surgical planning, case rehearsal, and case playback, which could change the paradigm of surgical training; this is especially necessary as the regulations surrounding residencies continue to change [63]. Surgeons are enabled to practice in controlled situations with preset variables, gaining experience in a wide variety of surgical scenarios [64].

With the availability of inexpensive computational power and the need for cost-effective solutions in healthcare, medical technology products are being commercialized at an increasingly rapid pace. VR is already incorporated into several emerging products for medical education, radiology, surgical planning and procedures, physical rehabilitation, disability solutions, and mental health [ 65 ]. For example, VR is helping surgeons learn invasive techniques before operating, and allowing physicians to conduct real-time remote diagnosis and treatment. Other applications of VR include the modeling of molecular structures in three dimensions as well as aiding in genetic mapping and drug synthesis.

In addition, the contribution of robotics has accelerated the replacement of many open surgical treatments with more efficient minimally invasive surgical techniques using 3D visualization techniques. Robotics provides mechanical assistance with surgical tasks, contributing greater precision and accuracy and allowing automation. Robots contain features that can augment surgical performance, for instance, by steadying a surgeon’s hand or scaling the surgeon’s hand motions [ 66 ]. Current robots work in tandem with human operators to combine the advantages of human thinking with the capabilities of robots to provide data, to optimize localization on a moving subject, to operate in difficult positions, or to perform without muscle fatigue. Surgical robots require spatial orientation between the robotic manipulators and the human operator, which can be provided by VR environments that re-create the surgical space. This enables surgeons to perform with the advantage of mechanical assistance but without being alienated from the sights, sounds, and touch of surgery [ 67 ].

After many years of research and development, Japanese scientists recently presented an autonomous robot that is able to perform surgery within the human body [68]. A miniature robot is sent inside the patient’s body; the surgeon perceives what the robot sees and touches and then conducts the surgery using the robot’s minute arms as though they were the surgeon’s own.

While the possibilities – and the need – for medical VR and robotics are immense, approaches and solutions using new applications require diligent, cooperative efforts among technology developers, medical practitioners and medical consumers to establish where future requirements and demand will lie. Augmented and virtual reality substituting or enhancing the reality can be considered as multi-reality approaches [ 69 ], which are already available in commercial products for clinical applications.

4.  DISCUSSION

In this paper, we have analyzed the written proceedings of the German annual meeting on Medical Imaging (BVM) and presented personal viewpoints on medical image processing, focusing on the transfer from science to application. Reflecting on successful clinical applications and promising technologies that have recently been developed, it turns out that medical image computing has moved from single images to multiple images, and there are several ways to combine these images:

  • Multi-modality : Figs. 2 and 3 have emphasized that medical image processing has moved away from the simple 2D radiograph via 3D imaging modalities towards multi-modal processing and analysis. Successful applications that are transferable into the clinics jointly process imagery from different modalities.
  • Multi-resolution : Here, images with different properties from the same subject and body area need alignment and comparison. Usually, this implies a multi-resolution approach, since different modalities work on different scales of resolutions.
  • Multi-scale : If data becomes large, as pointed out for digital pathology, algorithms must operate on different scales, iteratively refining the alignment from coarse to fine. Such an algorithmic design is usually referred to as a multi-scale approach.
  • Multi-subject : Models have been identified as a key issue for implementing applicable image computing. Such models are used for segmentation, content understanding, and intervention planning. They are generated from a reliable set of references, usually based on several subjects.
  • Multi-atlas : Even more complex, the personal viewpoints have identified multi-atlas approaches, which are nowadays addressed in research. In segmentation, for instance, the accuracy and robustness of algorithms are improved if they are based on multiple atlases rather than a single one. Both accuracy and robustness are essential requirements for transferring algorithms into clinical use.
  • Multi-semantics : Based on the example of digital endoscopy, another “multi” term is introduced. Image understanding and interpretation have been defined on several levels of semantics, and successful applications in computer-integrated endoscopy operate on several of these levels.
  • Multi-reality : Finally, our last viewpoint has addressed the augmentation of the physician’s view by means of virtual reality. Medical image computing is applied to generate and superimpose such views, which results in a multi-reality world.
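The coarse-to-fine refinement behind the multi-scale bullet above can be sketched for the simplest case, a pure translation: estimate the shift on a heavily downsampled image pair, then double it and refine it locally at each finer pyramid level. This is an illustrative toy (SSD matching with circular shifts), not a production registration method; all names and the search radius are assumptions.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def refine(fixed, moving, guess, radius=3):
    """Exhaustive SSD search for the best circular shift near `guess`."""
    best, best_err = guess, np.inf
    for dy in range(guess[0] - radius, guess[0] + radius + 1):
        for dx in range(guess[1] - radius, guess[1] + radius + 1):
            err = np.sum((np.roll(moving, (dy, dx), axis=(0, 1)) - fixed) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def coarse_to_fine_shift(fixed, moving, levels=3):
    """Multi-scale translation estimate: search at the coarsest level,
    then double and locally refine the shift at each finer level."""
    if levels == 0:
        return refine(fixed, moving, (0, 0))
    dy, dx = coarse_to_fine_shift(downsample(fixed), downsample(moving), levels - 1)
    return refine(fixed, moving, (2 * dy, 2 * dx))  # one coarse pixel = two fine pixels
```

The point of the pyramid is that the expensive exhaustive search only ever covers a small radius at each level, while large displacements are still recovered through the coarse levels.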

Andriole, Barish, and Khorasani have also discussed issues to consider for advanced image processing in the clinical arena [70]. Completing the collection of “multi” issues, they emphasized that radiology practices are experiencing a tremendous increase in the number of images associated with each imaging study, due to multi-slice , multi-plane and/or multi-detector 3D imaging equipment. Computer-aided detection, used as a second reader or as a first-pass screener, will help maintain or perhaps improve readers’ performance on such big data in terms of sensitivity and specificity.

Last but not least, with all these “multis”, the computational load of algorithms again becomes an issue. Modern computers provide enormous computational power, inviting a revisiting of several “old” approaches that have not yet found their way into clinical use simply because of their processing times. However, when many images of large size are combined, processing time becomes crucial again. Scholl et al. have recently addressed this issue, reviewing applications based on parallel processing and the usage of graphics processors for image analysis [12]. These can be seen as multi-processing methods.

In summary, medical image processing is a progressive field of research, and more and more applications are becoming part of clinical practice. These applications are based on one or more of the “multi” concepts that we have addressed in this review. However, effects from current trends in the Medical Device Directives, which increase the effort needed for clinical trials of new medical imaging procedures, cannot be observed to date. It will hence be interesting to follow the translation of scientific results of future BVM workshops into clinical applications.

ACKNOWLEDGEMENTS

We would like to thank Hans-Peter Meinzer, Co-Chair of the German BVM, for his helpful suggestions and for encouraging his research fellows to contribute, hence giving this paper a “ multi-generation ” view.

CONFLICT OF INTEREST

The author(s) confirm that this article content has no conflict of interest.


Title: Study on Image Filtering -- Techniques, Algorithm and Applications

Abstract: Image processing is one of the most emerging and widely growing techniques, making it a lively research field. Image processing converts an image to a digital format and then performs different operations on it, such as improving the image or extracting various valuable data. Image filtering is one of the fascinating applications of image processing. Image filtering is a technique for altering the size, shape, color, depth, smoothness, and other image properties. It alters the pixels of the image to transform it into the desired form, using different types of graphical editing methods through graphic design and editing software. This paper introduces various image filtering techniques and their wide applications.
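As a concrete instance of such a filter (an illustrative sketch, not taken from the paper itself): a 3×3 median filter replaces each pixel by the median of its neighborhood, which removes impulse noise while preserving edges better than linear smoothing.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: each pixel becomes the median of its 3x3
    neighbourhood; edges are padded by replicating border pixels."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    neighbours = [padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)]
    return np.median(neighbours, axis=0)
```

Applied to an image with isolated "salt" pixels, the filter removes the impulses entirely while leaving constant regions untouched, which is exactly why the median is preferred over the mean for this noise type.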
Subjects: Computer Vision and Pattern Recognition (cs.CV)
MSC classes: 68U10
ACM classes: I.4



Integration of Remote Sensing and Machine Learning for Precision Agriculture: A Comprehensive Perspective on Applications


1. Introduction
2. Remote Sensing Technology and the Machine Learning Method
  2.1. Remote Sensing Data in Precision Agriculture
  2.2. Overview of the Use of ML Algorithms in Precision Agriculture
3. Integrated Application of Remote Sensing Technology and the Machine Learning Method
  3.1. Agricultural Monitoring and Identification
  3.2. Stress Detection of Diseases and Insect Pests
  3.3. Management and Analysis of Soil and Land
  3.4. Prediction and Decision Making Regarding Crop Yield
4. Discussion
  4.1. Current Challenges
    4.1.1. Acquisition and Processing of Multi-Source RS Data
    4.1.2. Interpretability and Generalization of the Model
  4.2. Prospects for the Future
    4.2.1. Trend of Intelligence and Automation
    4.2.2. Data Sharing and Multidisciplinary Interaction
5. Conclusions
Author Contributions
Data Availability Statement
Conflicts of Interest

  • Casagli, N.; Cigna, F.; Bianchini, S.; Hölbling, D.; Füreder, P.; Righini, G.; Del Conte, S.; Friedl, B.; Schneiderbauer, S.; Iasio, C.; et al. Landslide mapping and monitoring by using radar and optical remote sensing: Examples from the EC-FP7 project SAFER. Remote Sens. Appl. Soc. Environ. 2016 , 4 , 92–108. [ Google Scholar ] [ CrossRef ]
  • Knoll, F.J.; Czymmek, V.; Poczihoski, S.; Holtorf, T.; Hussmann, S. Improving efficiency of organic farming by using a deep learning classification approach. Comput. Electron. Agric. 2018 , 153 , 347–356. [ Google Scholar ] [ CrossRef ]
  • Ouma, Y.O. Advancements in medium and high resolution Earth observation for land-surface imaging: Evolutions, future trends and contributions to sustainable development. Adv. Space Res. 2016 , 57 , 110–126. [ Google Scholar ] [ CrossRef ]
  • Sofia, G. Combining geomorphometry, feature extraction techniques and Earth-surface processes research: The way forward. Geomorphology 2020 , 355 , 107055. [ Google Scholar ] [ CrossRef ]
  • Saha, A.; Chandra Pal, S. Application of machine learning and emerging remote sensing techniques in hydrology: A state-of-the-art review and current research trends. J. Hydrol. 2024 , 632 , 130907. [ Google Scholar ] [ CrossRef ]
  • Rodi, N.S.N.; Malek, M.A.; Ismail, A.R. Monthly Rainfall Prediction Model of Peninsular Malaysia Using Clonal Selection Algorithm. Int. J. Eng. Technol. 2018 , 7 , 182–185. [ Google Scholar ] [ CrossRef ]
  • Latif, S.D.; Alyaa Binti Hazrin, N.; Hoon Koo, C.; Lin Ng, J.; Chaplot, B.; Feng Huang, Y.; El-Shafie, A.; Najah Ahmed, A. Assessing rainfall prediction models: Exploring the advantages of machine learning and remote sensing approaches. Alex. Eng. J. 2023 , 82 , 16–25. [ Google Scholar ] [ CrossRef ]
  • Khanal, S.; Fulton, J.; Shearer, S. An overview of current and potential applications of thermal remote sensing in precision agriculture. Comput. Electron. Agric. 2017 , 139 , 22–32. [ Google Scholar ] [ CrossRef ]
  • Ahmed, Z.; Shew, A.; Nalley, L.; Popp, M.; Green, V.S.; Brye, K. An examination of thematic research, development, and trends in remote sensing applied to conservation agriculture. Int. Soil Water Conserv. Res. 2024 , 12 , 77–95. [ Google Scholar ] [ CrossRef ]
  • Jafarbiglu, H.; Pourreza, A. A comprehensive review of remote sensing platforms, sensors, and applications in nut crops. Comput. Electron. Agric. 2022 , 197 , 106844. [ Google Scholar ] [ CrossRef ]
  • Degerickx, J.; Roberts, D.A.; McFadden, J.P.; Hermy, M.; Somers, B. Urban tree health assessment using airborne hyperspectral and LiDAR imagery. Int. J. Appl. Earth Obs. Geoinf. 2018 , 73 , 26–38. [ Google Scholar ] [ CrossRef ]
  • Duan, M.; Wang, Z.; Sun, L.; Liu, Y.; Yang, P. Monitoring apple flowering date at 10 m spatial resolution based on crop reference curves. Comput. Electron. Agric. 2024 , 225 , 109260. [ Google Scholar ] [ CrossRef ]
  • Meng, R.; Gao, R.; Zhao, F.; Huang, C.; Sun, R.; Lv, Z.; Huang, Z. Landsat-based monitoring of southern pine beetle infestation severity and severity change in a temperate mixed forest. Remote Sens. Environ. 2022 , 269 , 112847. [ Google Scholar ] [ CrossRef ]
  • Wu, B.; Liang, A.; Zhang, H.; Zhu, T.; Zou, Z.; Yang, D.; Tang, W.; Li, J.; Su, J. Application of conventional UAV-based high-throughput object detection to the early diagnosis of pine wilt disease by deep learning. For. Ecol. Manag. 2021 , 486 , 118986. [ Google Scholar ] [ CrossRef ]
  • Zhu, X.; Wang, R.; Shi, W.; Yu, Q.; Li, X.; Chen, X. Automatic Detection and Classification of Dead Nematode-Infested Pine Wood in Stages Based on YOLO v4 and GoogLeNet. Forests 2023 , 14 , 601. [ Google Scholar ] [ CrossRef ]
  • Luo, Y.; Huang, H.; Roques, A. Early Monitoring of Forest Wood-Boring Pests with Remote Sensing. Annu. Rev. Entomol. 2023 , 68 , 277–298. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Ren, S.; Chen, H.; Hou, J.; Zhao, P.; Dong Qg Feng, H. Based on historical weather data to predict summer field-scale maize yield: Assimilation of remote sensing data to WOFOST model by ensemble Kalman filter algorithm. Comput. Electron. Agric. 2024 , 219 , 108822. [ Google Scholar ] [ CrossRef ]
  • Guerrero, N.M.; Aparicio, J.; Valero-Carreras, D. Combining Data Envelopment Analysis and Machine Learning. Mathematics 2022 , 10 , 909. [ Google Scholar ] [ CrossRef ]
  • Sharma, A.; Jain, A.; Gupta, P.; Chowdary, V. Machine Learning Applications for Precision Agriculture: A Comprehensive Review. IEEE Access 2021 , 9 , 4843–4873. [ Google Scholar ] [ CrossRef ]
  • Behmann, J.; Mahlein, A.K.; Rumpf, T.; Römer, C.; Plümer, L. A review of advanced machine learning methods for the detection of biotic stress in precision crop protection. Precis. Agric. 2015 , 16 , 239–260. [ Google Scholar ] [ CrossRef ]
  • Helm, J.M.; Swiergosz, A.M.; Haeberle, H.S.; Karnuta, J.M.; Schaffer, J.L.; Krebs, V.E.; Spitzer, A.I.; Ramkumar, P.N. Machine Learning and Artificial Intelligence: Definitions, Applications, and Future Directions. Curr. Rev. Musculoskelet. Med. 2020 , 13 , 69–76. [ Google Scholar ] [ CrossRef ]
  • Gao, Z.; Luo, Z.; Zhang, W.; Lv, Z.; Xu, Y. Deep Learning Application in Plant Stress Imaging: A Review. AgriEngineering 2020 , 2 , 430–446. [ Google Scholar ] [ CrossRef ]
  • Benos, L.; Tagarakis, A.C.; Dolias, G.; Berruto, R.; Kateris, D.; Bochtis, D. Machine Learning in Agriculture: A Comprehensive Updated Review. Sensors 2021 , 21 , 3758. [ Google Scholar ] [ CrossRef ]
  • Choi, R.Y.; Coyner, A.S.; Kalpathy-Cramer, J.; Chiang, M.F.; Campbell, J.P. Introduction to Machine Learning, Neural Networks, and Deep Learning. Transl. Vis. Sci. Technol. 2020 , 9 , 14. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Simeone, O. A Very Brief Introduction to Machine Learning with Applications to Communication Systems. IEEE Trans. Cogn. Commun. Netw. 2018 , 4 , 648–664. [ Google Scholar ] [ CrossRef ]
  • Albarakati, H.M.; Khan, M.A.; Hamza, A.; Khan, F.; Kraiem, N.; Jamel, L.; Almuqren, L.; Alroobaea, R. A Novel Deep Learning Architecture for Agriculture Land Cover and Land Use Classification from Remote Sensing Images Based on Network-Level Fusion of Self-Attention Architecture. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024 , 17 , 6338–6353. [ Google Scholar ] [ CrossRef ]
  • Finley, A.O.; Andersen, H.E.; Babcock, C.; Cook, B.D.; Morton, D.C.; Banerjee, S. Models to Support Forest Inventory and Small Area Estimation Using Sparsely Sampled LiDAR: A Case Study Involving G-LiHT LiDAR in Tanana, Alaska. J. Agric. Biol. Environ. Stat. 2024 , 28 . [ Google Scholar ] [ CrossRef ]
  • Shafik, W.; Tufail, A.; Namoun, A.; De Silva, L.C.; Apong, R. A Systematic Literature Review on Plant Disease Detection: Motivations, Classification Techniques, Datasets, Challenges, and Future Trends. IEEE Access 2023 , 11 , 59174–59203. [ Google Scholar ] [ CrossRef ]
  • El Akhal, H.; Ben Yahya, A.; Moussa, N.; El Alaouil, A.E. A novel approach for image-based olive leaf diseases classification using a deep hybrid model. Ecol. Inform. 2023 , 77 , 102276. [ Google Scholar ] [ CrossRef ]
  • Abbas, F.; Afzaal, H.; Farooque, A.A.; Tang, S. Crop Yield Prediction through Proximal Sensing and Machine Learning Algorithms. Agronomy 2020 , 10 , 1046. [ Google Scholar ] [ CrossRef ]
  • Fu, Z.P.; Jiang, J.; Gao, Y.; Krienke, B.; Wang, M.; Zhong, K.T.; Cao, Q.; Tian, Y.C.; Zhu, Y.; Cao, W.X.; et al. Wheat Growth Monitoring and Yield Estimation based on Multi-Rotor Unmanned Aerial Vehicle. Remote Sens. 2020 , 12 , 508. [ Google Scholar ] [ CrossRef ]
  • Guo, H.L.; Zhang, R.R.; Dai, W.H.; Zhou, X.W.; Zhang, D.J.; Yang, Y.H.; Cui, J. Mapping Soil Organic Matter Content Based on Feature Band Selection with ZY1-02D Hyperspectral Satellite Data in the Agricultural Region. Agronomy 2022 , 12 , 2111. [ Google Scholar ] [ CrossRef ]
  • Erler, A.; Riebe, D.; Beitz, T.; Löhmannsröben, H.G.; Gebbers, R. Soil Nutrient Detection for Precision Agriculture Using Handheld Laser-Induced Breakdown Spectroscopy (LIBS) and Multivariate Regression Methods (PLSR, Lasso and GPR). Sensors 2020 , 20 , 418. [ Google Scholar ] [ CrossRef ]
  • Yoon, H.I.; Lee, H.; Yang, J.S.; Choi, J.H.; Jung, D.H.; Park, Y.J.; Park, J.E.; Kim, S.M.; Park, S.H. Predicting Models for Plant Metabolites Based on PLSR, AdaBoost, XGBoost, and LightGBM Algorithms Using Hyperspectral Imaging Brassica juncea . Agriculture 2023 , 13 , 1477. [ Google Scholar ] [ CrossRef ]
  • Bakhshipour, A. Cascading Feature Filtering and Boosting Algorithm for Plant Type Classification Based on Image Features. IEEE Access 2021 , 9 , 82021–82030. [ Google Scholar ] [ CrossRef ]
  • Luo, L.L.; Chang, Q.R.; Wang, Q.; Huang, Y. Identification and Severity Monitoring of Maize Dwarf Mosaic Virus Infection Based on Hyperspectral Measurements. Remote Sens. 2021 , 13 , 4560. [ Google Scholar ] [ CrossRef ]
  • Shinde, S.; Patidar, H. Hyperspectral Image Classification for Vegetation Detection Using Lightweight Cascaded Deep Convolutional Neural Network. J. Indian Soc. Remote Sens. 2023 , 51 , 2159–2166. [ Google Scholar ] [ CrossRef ]
  • Barbedo, J.G.A.; Koenigkan, L.V.; Santos, P.M.; Ribeiro, A.R.B. Counting Cattle in UAV Images—Dealing with Clustered Animals and Animal/Background Contrast Changes. Sensors 2020 , 20 , 2126. [ Google Scholar ] [ CrossRef ]
  • Han, T.; Hu, X.M.; Zhang, J.; Xue, W.H.; Che, Y.F.; Deng, X.Q.; Zhou, L.H. Rebuilding high-quality near-surface ozone data based on the combination of WRF-Chem model with a machine learning method to better estimate its impact on crop yields in the Beijing-Tianjin-Hebei region from 2014 to 2019. Environ. Pollut. 2023 , 336 , 122334. [ Google Scholar ] [ CrossRef ]
  • Gauci, A.; Abela, J.; Austad, M.; Cassar, L.F.; Zarb Adami, K. A Machine Learning approach for automatic land cover mapping from DSLR images over the Maltese Islands. Environ. Model. Softw. 2018 , 99 , 1–10. [ Google Scholar ] [ CrossRef ]
  • Idol, T.; Haack, B.; Mahabir, R. Radar speckle reduction and derived texture measures for land cover/use classification: A case study. Geocarto Int. 2017 , 32 , 18–29. [ Google Scholar ] [ CrossRef ]
  • Li, L.; Dong, Y.Y.; Xiao, Y.X.; Liu, L.Y.; Zhao, X.; Huang, W.J. Combining Disease Mechanism and Machine Learning to Predict Wheat Fusarium Head Blight. Remote Sens. 2022 , 14 , 2732. [ Google Scholar ] [ CrossRef ]
  • Bebie, M.; Cavalaris, C.; Kyparissis, A. Assessing Durum Wheat Yield through Sentinel-2 Imagery: A Machine Learning Approach. Remote Sens. 2022 , 14 , 3880. [ Google Scholar ] [ CrossRef ]
  • Zhou, Y.N.; Luo, J.C.; Feng, L.; Yang, Y.P.; Chen, Y.H.; Wu, W. Long-short-term-memory-based crop classification using high-resolution optical images and multi-temporal SAR data. GISci. Remote Sens. 2019 , 56 , 1170–1191. [ Google Scholar ] [ CrossRef ]
  • Jimenez, A.F.; Ortiz, B.V.; Bondesan, L.; Morata, G.; Damianidis, D. Long Short-Term Memory Neural Network for irrigation management: A case study from Southern Alabama, USA. Precis. Agric. 2021 , 22 , 475–492. [ Google Scholar ] [ CrossRef ]
  • Chen, C.; Bao, Y.X.; Zhu, F.; Yang, R.M. Remote sensing monitoring of rice growth under Cnaphalocrocis medinalis (Guenée) damage by integrating satellite and UAV remote sensing data. Int. J. Remote Sens. 2024 , 45 , 772–790. [ Google Scholar ] [ CrossRef ]
  • Dumdumaya, C.E.; Cabrera, J.S. Determination of future land use changes using remote sensing imagery and artificial neural network algorithm: A case study of Davao City, Philippines. Artif. Intell. Geosci. 2023 , 4 , 111–118. [ Google Scholar ] [ CrossRef ]
  • Bao Pham, Q.; Ajim Ali, S.; Parvin, F.; Van On, V.; Mohd Sidek, L.; Đurin, B.; Cetl, V.; Šamanović, S.; Nguyet Minh, N. Multi-spectral remote sensing and GIS-based analysis for decadal land use land cover changes and future prediction using random forest tree and artificial neural network. Adv. Space Res. 2024 , 10 , 29900–29926. [ Google Scholar ] [ CrossRef ]
  • Zhang, J.; Zhang, Y.; Zhou, T.; Sun, Y.; Yang, Z.; Zheng, S. Research on the identification of land types and tree species in the Engebei ecological demonstration area based on GF-1 remote sensing. Ecol. Inform. 2023 , 77 , 102242. [ Google Scholar ] [ CrossRef ]
  • Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS Journal of Photogrammetry and Remote Sens. 2016 , 114 , 24–31. [ Google Scholar ] [ CrossRef ]
  • Whyte, A.; Ferentinos, K.P.; Petropoulos, G.P. A new synergistic approach for monitoring wetlands using Sentinels -1 and 2 data with object-based machine learning algorithms. Environ. Model. Softw. 2018 , 104 , 40–54. [ Google Scholar ] [ CrossRef ]
  • Ali, M.Z.; Qazi, W.; Aslam, N. A comparative study of ALOS-2 PALSAR and landsat-8 imagery for land cover classification using maximum likelihood classifier. Egypt J. Remote Sens. Space Sci. 2018 , 21 , S29–S35. [ Google Scholar ] [ CrossRef ]
  • Ghayour, L.; Neshat, A.; Paryani, S.; Shahabi, H.; Shirzadi, A.; Chen, W.; Al-Ansari, N.; Geertsema, M.; Pourmehdi Amiri, M.; Gholamnia, M.; et al. Performance Evaluation of Sentinel-2 and Landsat 8 OLI Data for Land Cover/Use Classification Using a Comparison between Machine Learning Algorithms. Remote Sens. 2021 , 13 , 1349. [ Google Scholar ] [ CrossRef ]
  • Nguyen, T.T.; Ngo, H.H.; Guo, W.S.; Chang, S.W.; Nguyen, D.D.; Nguyen, C.T.; Zhang, J.; Liang, S.; Bui, X.T.; Hoang, N.B. A low-cost approach for soil moisture prediction using multi-sensor data and machine learning algorithm. Sci. Total Environ. 2022 , 833 , 12–155066. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Liu, Y.; Sun, Q.; Huang, J.; Feng, H.K.; Wang, J.J.; Yang, G.J. Estimation of Potato Above Ground Biomass Based on UAV Multispectral Images. Spectrosc. Spectr. Anal. 2021 , 41 , 2549–2555. [ Google Scholar ]
  • Li, Z.P.; Zhou, X.G.; Cheng, Q.; Fei, S.P.; Chen, Z. A Machine-Learning Model Based on the Fusion of Spectral and Textural Features from UAV Multi-Sensors to Analyse the Total Nitrogen Content in Winter Wheat. Remote Sens. 2023 , 15 , 2152. [ Google Scholar ] [ CrossRef ]
  • Pejak, B.; Lugonja, P.; Antic, A.; Panic, M.; Pandzic, M.; Alexakis, E.; Mavrepis, P.; Zhou, N.A.; Marko, O.; Crnojevic, V. Soya Yield Prediction on a Within-Field Scale Using Machine Learning Models Trained on Sentinel-2 and Soil Data. Remote Sens. 2022 , 14 , 2256. [ Google Scholar ] [ CrossRef ]
  • Ye, Y.; Huang, Q.Q.; Rong, Y.; Yu, X.H.; Liang, W.J.; Chen, Y.X.; Xiong, S.W. Field detection of small pests through stochastic gradient descent with genetic algorithm. Comput. Electron. Agric. 2023 , 206 , 107694. [ Google Scholar ] [ CrossRef ]
  • Zualkernan, I.; Abuhani, D.A.; Hussain, M.H.; Khan, J.; El Mohandes, M. Machine Learning for Precision Agriculture Using Imagery from Unmanned Aerial Vehicles (UAVs): A Survey. Drones 2023 , 7 , 382. [ Google Scholar ] [ CrossRef ]
  • Khan, S.; Tufail, M.; Khan, M.T.; Khan, Z.A.; Iqbal, J.; Alam, M. A novel semi-supervised framework for UAV based crop/weed classification. PLoS ONE 2021 , 16 , e0251008. [ Google Scholar ] [ CrossRef ]
  • Mujkic, E.; Philipsen, M.P.; Moeslund, T.B.; Christiansen, M.P.; Ravn, O. Anomaly Detection for Agricultural Vehicles Using Autoencoders. Sensors 2022 , 22 , 3608. [ Google Scholar ] [ CrossRef ]
  • Chen, X.; Zhang, C.; Yan, K.; Wei, Z.; Cheng, N. Risk Assessment of Agricultural Soil Heavy Metal Pollution Under the Hybrid Intelligent Evaluation Model. IEEE Access 2023 , 11 , 106847–106858. [ Google Scholar ] [ CrossRef ]
  • Alvarenga, T.C.; De Lima, R.R.; Simao, S.D.; Brandao Junior, L.C.; Bueno Filho, J.S.D.S.; Alvarenga, R.R.; Rodrigues, P.B.; Leite, D.F. Ensemble of hybrid Bayesian networks for predicting the AMEn of broiler feedstuffs. Comput. Electron. Agric. 2022 , 198 , 107067. [ Google Scholar ] [ CrossRef ]
  • Lu, Q.K.; Xie, Y.P.; Wei, L.F.; Wei, Z.Y.; Tian, S.; Liu, H.; Cao, L. Extended Attribute Profiles for Precise Crop Classification in UAV-Borne Hyperspectral Imagery. IEEE Geosci. Remote Sens. Lett. 2024 , 21 , 2500805. [ Google Scholar ] [ CrossRef ]
  • Maeda, N.; Tonooka, H. Early Stage Forest Fire Detection from Himawari-8 AHI Images Using a Modified MOD14 Algorithm Combined with Machine Learning. Sensors 2023 , 23 , 210. [ Google Scholar ] [ CrossRef ]
  • Furuya, D.E.G.; Ma, L.F.; Pinheiro, M.M.F.; Gomes, F.D.G.; Gonçalvez, W.N.; Marcato, J.; Rodrigues, D.D.; Blassioli-Moraes, M.C.; Michereff, M.F.F.; Borges, M.; et al. Prediction of insect-herbivory-damage and insect-type attack in maize plants using hyperspectral data. Int. J. Appl. Earth Obs. Geoinf. 2021 , 105 , 102608. [ Google Scholar ] [ CrossRef ]
  • Javadi, S.H.; Guerrero, A.; Mouazen, A.M. Clustering and Smoothing Pipeline for Management Zone Delineation Using Proximal and Remote Sensing. Sensors 2022 , 22 , 645. [ Google Scholar ] [ CrossRef ]
  • Devarajan, G.G.; Nagarajan, S.M.; Ramana, T.V.; Vignesh, T.; Ghosh, U.; Alnumay, W. DDNSAS: Deep reinforcement learning based deep Q-learning network for smart agriculture system. Sust. Comput. 2023 , 39 , 100890. [ Google Scholar ] [ CrossRef ]
  • Din, A.; Ismail, M.Y.; Shah, B.B.; Babar, M.; Ali, F.; Baig, S.U. A deep reinforcement learning-based multi-agent area coverage control for smart agriculture. Comput. Electr. Eng. 2022 , 101 , 108089. [ Google Scholar ] [ CrossRef ]
  • García, R.; Aguilar, J.; Toro, M.; Pinto, A.; Rodríguez, P. A systematic literature review on the use of machine learning in precision livestock farming. Comput. Electron. Agric. 2020 , 179 , 105826. [ Google Scholar ] [ CrossRef ]
  • Shahab, H.; Iqbal, M.; Sohaib, A.; Ullah Khan, F.; Waqas, M. IoT-based agriculture management techniques for sustainable farming: A comprehensive review. Comput. Electron. Agric. 2024 , 220 , 108851. [ Google Scholar ] [ CrossRef ]
  • Rehman, T.U.; Mahmud, M.S.; Chang, Y.K.; Jin, J.; Shin, J. Current and future applications of statistical machine learning algorithms for agricultural machine vision systems. Comput. Electron. Agric. 2019 , 156 , 585–605. [ Google Scholar ] [ CrossRef ]
  • Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification. Comput. Intell. Neurosci. 2016 , 2016 , 3289801. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Li, J.; Qiao, Y.; Liu, S.; Zhang, J.; Yang, Z.; Wang, M. An improved YOLOv5-based vegetable disease detection method. Comput. Electron. Agric. 2022 , 202 , 107345. [ Google Scholar ] [ CrossRef ]
  • Ashwinkumar, S.; Rajagopal, S.; Manimaran, V.; Jegajothi, B. Automated plant leaf disease detection and classification using optimal MobileNet based convolutional neural networks. Mater. Today Proc. 2022 , 51 , 480–487. [ Google Scholar ] [ CrossRef ]
  • Yu, Y. Research Progress of Crop Disease Image Recognition Based on Wireless Network Communication and Deep Learning. Wirel. Commun. Mob. Comput. 2021 , 2021 , 7577349. [ Google Scholar ] [ CrossRef ]
  • Ang, Y.H.; Shafri, H.Z.M.; Lee, Y.P.; Abidin, H.; Bakar, S.A.; Hashim, S.J.; Che’Ya, N.N.; Hassan, M.R.; San Lim, H.; Abdullah, R. A novel ensemble machine learning and time series approach for oil palm yield prediction using Landsat time series imagery based on NDVI. Geocarto Int. 2022 , 37 , 9865–9896. [ Google Scholar ] [ CrossRef ]
  • Aydin, Y.; Isikdag, U.; Bekdas, G.; Nigdeli, S.M.; Geem, Z.W. Use of Machine Learning Techniques in Soil Classification. Sustainability 2023 , 15 , 2374. [ Google Scholar ] [ CrossRef ]
  • Osco, L.P.; Nogueira, K.; Marques Ramos, A.P.; Faita Pinheiro, M.M.; Furuya, D.E.G.; Gonçalves, W.N.; de Castro Jorge, L.A.; Marcato Junior, J.; dos Santos, J.A. Semantic segmentation of citrus-orchard using deep neural networks and multispectral UAV-based imagery. Precis. Agric. 2021 , 22 , 1171–1188. [ Google Scholar ] [ CrossRef ]
  • Kellenberger, B.; Marcos, D.; Tuia, D. Detecting mammals in UAV images: Best practices to address a substantially imbalanced dataset with deep learning. Remote Sens. Environ. 2018 , 216 , 139–153. [ Google Scholar ] [ CrossRef ]
  • Kamath, R.; Balachandra, M.; Vardhan, A.; Maheshwari, U. Classification of paddy crop and weeds using semantic segmentation. Cogent Eng. 2022 , 9 , 2018791. [ Google Scholar ] [ CrossRef ]
  • Jin, X.; Sun, Y.; Che, J.; Bagavathiannan, M.; Yu, J.; Chen, Y. A novel deep learning-based method for detection of weeds in vegetables. Pest Manag. Sci. 2022 , 78 , 1861–1869. [ Google Scholar ] [ CrossRef ]
  • Xun, L.; Zhang, J.; Cao, D.; Wang, J.; Zhang, S.; Yao, F. Mapping cotton cultivated area combining remote sensing with a fused representation-based classification algorithm. Comput. Electron. Agric. 2021 , 181 , 105940. [ Google Scholar ] [ CrossRef ]
  • Zhao, H.; Huang, Y.; Wang, X.; Li, X.; Lei, T. The performance of SPEI integrated remote sensing data for monitoring agricultural drought in the North China Plain. Field Crops Res. 2023 , 302 , 109041. [ Google Scholar ] [ CrossRef ]
  • Lyu, X.; Li, X.; Dang, D.; Dou, H.; Xuan, X.; Liu, S.; Li, M.; Gong, J. A new method for grassland degradation monitoring by vegetation species composition using hyperspectral remote sensing. Ecol. Indic. 2020 , 114 , 106310. [ Google Scholar ] [ CrossRef ]
  • Xiao, D.; Niu, H.; Guo, F.; Zhao, S.; Fan, L. Monitoring irrigation dynamics in paddy fields using spatiotemporal fusion of Sentinel-2 and MODIS. Agric. Water Manag. 2022 , 263 , 107409. [ Google Scholar ] [ CrossRef ]
  • Zhang, G.; Xiao, X.; Dong, J.; Kou, W.; Jin, C.; Qin, Y.; Zhou, Y.; Wang, J.; Menarguez, M.A.; Biradar, C. Mapping paddy rice planting areas through time series analysis of MODIS land surface temperature and vegetation index data. ISPRS J. Photogramm. Remote Sens. 2015 , 106 , 157–171. [ Google Scholar ] [ CrossRef ]
  • Liu, J.-R.; Liu, Q.; Khoury, J.; Li, Y.-J.; Han, X.-H.; Li, J.; Ibla, J.C. Hypoxic preconditioning decreases nuclear factor κB activity via Disrupted in Schizophrenia-1. Int. J. Biochem. Cell Biol. 2016 , 70 , 140–148. [ Google Scholar ] [ CrossRef ]
  • Guo, Y.; Ren, H. Remote sensing monitoring of maize and paddy rice planting area using GF-6 WFV red edge features. Comput. Electron. Agric. 2023 , 207 , 107714. [ Google Scholar ] [ CrossRef ]
  • DeVries, B.; Verbesselt, J.; Kooistra, L.; Herold, M. Robust monitoring of small-scale forest disturbances in a tropical montane forest using Landsat time series. Remote Sens. Environ. 2015 , 161 , 107–121. [ Google Scholar ] [ CrossRef ]
  • Jevsenak, J.; Arnic, D.; Krajnc, L.; Skudnik, M. Machine Learning Forest Simulator (MLFS): R package for data-driven assessment of the future state of forests. Ecol. Inform. 2023 , 75 , 102115. [ Google Scholar ] [ CrossRef ]
  • Bagheri Bodaghabadi, M.; Martínez-Casasnovas, J.A.; Esfandiarpour Borujeni, I.; Salehi, M.H.; Mohammadi, J.; Toomanian, N. Database extension for digital soil mapping using artificial neural networks. Arab. J. Geosci. 2016 , 9 , 701. [ Google Scholar ] [ CrossRef ]
  • Dornik, A.; Drăguț, L.; Urdea, P. Classification of Soil Types Using Geographic Object-Based Image Analysis and Random Forests. Pedosphere 2018 , 28 , 913–925. [ Google Scholar ] [ CrossRef ]
  • Lu, H.; Liu, C.; Li, N.; Fu, X.; Li, L. Optimal segmentation scale selection and evaluation of cultivated land objects based on high-resolution remote sensing images with spectral and texture features. Environ. Sci. Pollut. Res. 2021 , 28 , 27067–27083. [ Google Scholar ] [ CrossRef ]
  • Rai, N.; Flores, P. Leveraging transfer learning in ArcGIS Pro to detect “doubles” in a sunflower field. In ASABE Annual International Virtual Meeting ; ASABE: St. Joseph, MI, USA, 2021; p. 1. [ Google Scholar ]
  • Butte, S.; Vakanski, A.; Duellman, K.; Wang, H.; Mirkouei, A. Potato crop stress identification in aerial images using deep learning-based object detection. Agron. J. 2021 , 113 , 3991–4002. [ Google Scholar ] [ CrossRef ]
  • Rong, J.; Zhou, H.; Zhang, F.; Yuan, T.; Wang, P. Tomato cluster detection and counting using improved YOLOv5 based on RGB-D fusion. Comput. Electron. Agric. 2023 , 207 , 107741. [ Google Scholar ] [ CrossRef ]
  • Guo, Q.; Potter, K.M.; Ren, H.; Zhang, P. Impacts of Exotic Pests on Forest Ecosystems: An Update. Forests 2023 , 14 , 605. [ Google Scholar ] [ CrossRef ]
  • Li, W.; Zheng, T.; Yang, Z.; Li, M.; Sun, C.; Yang, X. Classification and detection of insects from field images using deep learning for smart pest management: A systematic review. Ecol. Inform. 2021 , 66 , 101460. [ Google Scholar ] [ CrossRef ]
  • Sun, Y.; Liu, X.; Yuan, M.; Ren, L.; Wang, J.; Chen, Z. Automatic in-trap pest detection using deep learning for pheromone-based Dendroctonus valens monitoring. Biosyst. Eng. 2018 , 176 , 140–150. [ Google Scholar ] [ CrossRef ]
  • Partel, V.; Nunes, L.; Stansly, P.; Ampatzidis, Y. Automated vision-based system for monitoring Asian citrus psyllid in orchards utilizing artificial intelligence. Comput. Electron. Agric. 2019 , 162 , 328–336. [ Google Scholar ] [ CrossRef ]
  • Mahanta, D.K.; Bhoi, T.K.; Komal, J.; Samal, I.; Mastinu, A. Spatial, spectral and temporal insights: Harnessing high-resolution satellite remote sensing and artificial intelligence for early monitoring of wood boring pests in forests. Plant Stress. 2024 , 11 , 100381. [ Google Scholar ] [ CrossRef ]
  • Bhatnagar, S.; Mahanta, D.K.; Vyas, V.; Samal, I.; Komal, J.; Bhoi, T.K. Storage Pest Management with Nanopesticides Incorporating Silicon Nanoparticles: A Novel Approach for Sustainable Crop Preservation and Food Security. Silicon 2024 , 16 , 471–483. [ Google Scholar ] [ CrossRef ]
  • Barchenkov, A.; Rubtsov, A.; Safronova, I.; Astapenko, S.; Tabakova, K.; Bogdanova, K.; Anuev, E.; Arzac, A. Features of Scots Pine Mortality Due to Incursion of Pine Bark Beetles in Symbiosis with Ophiostomatoid Fungi in the Forest-Steppe of Central Siberia. Forests 2023 , 14 , 1301. [ Google Scholar ] [ CrossRef ]
  • Ballesteros, R.; Ortega, J.F.; Hernández, D.; Moreno, M.A. Applications of georeferenced high-resolution images obtained with unmanned aerial vehicles. Part II: Application to maize and onion crops of a semi-arid region in Spain. Precis. Agric. 2014 , 15 , 593–614. [ Google Scholar ] [ CrossRef ]
  • Gopalakrishnan, R.; Subhash, C.; Kalpana, K. Predictive zoning of rice stem borer damage in southern India through spatial interpolation of weather-based models. J. Environ. Biol. 2014 , 35 , 923–928. [ Google Scholar ]
  • Nurfaiz Abd Kharim, M.; Wayayok, A.; Fikri Abdullah, A.; Rashid Mohamed Shariff, A.; Mohd Husin, E.; Razif Mahadi, M. Predictive zoning of pest and disease infestations in rice field based on UAV aerial imagery. Egypt. J. Remote Sens. Space Sci. 2022 , 25 , 831–840. [ Google Scholar ] [ CrossRef ]
  • Shi, Y.; Huang, W.; Luo, J.; Huang, L.; Zhou, X. Detection and discrimination of pests and diseases in winter wheat based on spectral indices and kernel discriminant analysis. Comput. Electron. Agric. 2017 , 141 , 171–180. [ Google Scholar ] [ CrossRef ]
  • Yuan, L.; Zhang, H.; Zhang, Y.; Xing, C.; Bao, Z. Feasibility assessment of multi-spectral satellite sensors in monitoring and discriminating wheat diseases and insects. Optik 2017 , 131 , 598–608. [ Google Scholar ] [ CrossRef ]
  • Ebrahimi, M.A.; Khoshtaghaza, M.H.; Minaei, S.; Jamshidi, B. Vision-based pest detection based on SVM classification method. Comput. Electron. Agric. 2017 , 137 , 52–58. [ Google Scholar ] [ CrossRef ]
  • Kumar, D.; Kukreja, V. An Instance Segmentation Approach for Wheat Yellow Rust Disease Recognition. In Proceedings of the International Conference on Decision Aid Sciences and Application (DASA), Sakheer, Bahrain, 7–8 December 2021; pp. 926–931. [ Google Scholar ]
  • Amarathunga, D.C.; Grundy, J.; Parry, H.; Dorin, A. Methods of insect image capture and classification: A Systematic literature review. Smart Agric. Technol. 2021 , 1 , 100023. [ Google Scholar ] [ CrossRef ]
  • Tetila, E.C.; Machado, B.B.; Menezes, G.V.; Belete, N.A.d.S.; Astolfi, G.; Pistori, H. A Deep-Learning Approach for Automatic Counting of Soybean Insect Pests. IEEE Geosci. Remote Sens. Lett. 2020 , 17 , 1837–1841. [ Google Scholar ] [ CrossRef ]
  • Abade, A.; Porto, L.F.; Ferreira, P.A.; de Barros Vidal, F. NemaNet: A convolutional neural network model for identification of soybean nematodes. Biosyst. Eng. 2022 , 213 , 39–62. [ Google Scholar ] [ CrossRef ]
  • Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018 , 147 , 70–90. [ Google Scholar ] [ CrossRef ]
  • Li, R.; Wang, R.; Zhang, J.; Xie, C.; Liu, L.; Wang, F.; Chen, H.; Chen, T.; Hu, H.; Jia, X.; et al. An Effective Data Augmentation Strategy for CNN-Based Pest Localization and Recognition in the Field. IEEE Access 2019 , 7 , 160274–160283. [ Google Scholar ] [ CrossRef ]
  • Vélez, S.; Ariza-Sentís, M.; Valente, J. Mapping the spatial variability of Botrytis bunch rot risk in vineyards using UAV multispectral imagery. Eur. J. Agron. 2023 , 142 , 126691. [ Google Scholar ] [ CrossRef ]
  • Gomez Selvaraj, M.; Vergara, A.; Montenegro, F.; Alonso Ruiz, H.; Safari, N.; Raymaekers, D.; Ocimati, W.; Ntamwira, J.; Tits, L.; Omondi, A.B.; et al. Detection of banana plants and their major diseases through aerial images and machine learning methods: A case study in DR Congo and Republic of Benin. ISPRS J. Photogramm. Remote Sens. 2020 , 169 , 110–124. [ Google Scholar ] [ CrossRef ]
  • Alshammari, H.H.; Alzahrani, A. Employing a hybrid lion-firefly algorithm for recognition and classification of olive leaf disease in Saudi Arabia. Alexandria. Eng. J. 2023 , 84 , 215–226. [ Google Scholar ] [ CrossRef ]
  • Zhang, T.; Xu, Z.; Su, J.; Yang, Z.; Liu, C.; Chen, W.-H.; Li, J. Ir-UNet: Irregular Segmentation U-Shape Network for Wheat Yellow Rust Detection by UAV Multispectral Imagery. Remote Sens. 2021 , 13 , 3892. [ Google Scholar ] [ CrossRef ]
  • Jin, X.; Jie, L.; Wang, S.; Qi, H.J.; Li, S.W. Classifying Wheat Hyperspectral Pixels of Healthy Heads and Fusarium Head Blight Disease Using a Deep Neural Network in the Wild Field. Remote Sens. 2018 , 10 , 395. [ Google Scholar ] [ CrossRef ]
  • Zhang, Y.; Lv, C. TinySegformer: A lightweight visual segmentation model for real-time agricultural pest detection. Comput. Electron. Agric. 2024 , 218 , 108740. [ Google Scholar ] [ CrossRef ]
  • Lu, S.; Ye, S.-j. Using an image segmentation and support vector machine method for identifying two locust species and instars. J. Integr. Agric. 2020 , 19 , 1301–1313. [ Google Scholar ] [ CrossRef ]
  • Barbedo, J.G.A.; Tibola, C.S.; Fernandes, J.M.C. Detecting Fusarium head blight in wheat kernels using hyperspectral imaging. Biosyst. Eng. 2015 , 131 , 65–76. [ Google Scholar ] [ CrossRef ]
  • Mumtaz, R.; Maqsood, M.H.; Haq, I.u.; Shafi, U.; Mahmood, Z.; Mumtaz, M. Integrated digital image processing techniques and deep learning approaches for wheat stripe rust disease detection and grading. Decis. Anal. J. 2023 , 8 , 100305. [ Google Scholar ] [ CrossRef ]
  • Bao, W.; Zhu, Z.; Hu, G.; Zhou, X.; Zhang, D.; Yang, X. UAV remote sensing detection of tea leaf blight based on DDMA-YOLO. Comput. Electron. Agric. 2023 , 205 , 107637. [ Google Scholar ] [ CrossRef ]
  • Li, D.; Song, Z.; Quan, C.; Xu, X.; Liu, C. Recent advances in image fusion technology in agriculture. Comput. Electron. Agric. 2021 , 191 , 106491. [ Google Scholar ] [ CrossRef ]
  • Ali, M.A.; Sharma, A.K.; Dhanaraj, R.K. Heterogeneous features and deep learning networks fusion-based pest detection, prevention and controlling system using IoT and pest sound analytics in a vast agriculture system. Comput. Electr. Eng. 2024 , 116 , 109146. [ Google Scholar ] [ CrossRef ]
  • Lin, Q.; Huang, H.; Wang, J.; Chen, L.; Du, H.; Zhou, G. Early detection of pine shoot beetle attack using vertical profile of plant traits through UAV-based hyperspectral, thermal, and lidar data fusion. Int. J. Appl. Earth Obs. Geoinf. 2023 , 125 , 103549. [ Google Scholar ] [ CrossRef ]
  • Dalagnol, R.; Phillips, O.L.; Gloor, E.; Galvão, L.S.; Wagner, F.H.; Locks, C.J.; Aragão, L.E.O.C. Quantifying Canopy Tree Loss and Gap Recovery in Tropical Forests under Low-Intensity Logging Using VHR Satellite Imagery and Airborne LiDAR. Remote Sens. 2019 , 11 , 817. [ Google Scholar ] [ CrossRef ]
  • Pantazi, X.E.; Moshou, D.; Bochtis, D. Chapter 5—Tutorial II: Disease detection with fusion techniques. In Intelligent Data Mining and Fusion Systems in Agriculture ; Pantazi, X.E., Moshou, D., Bochtis, D., Eds.; Academic Press: Cambridge, MA, USA, 2020; pp. 199–221. [ Google Scholar ]
  • Kaya, Y.; Gürsoy, E. A novel multi-head CNN design to identify plant diseases using the fusion of RGB images. Ecol. Inform. 2023 , 75 , 101998. [ Google Scholar ] [ CrossRef ]
  • Ma, R.; Zhang, N.; Zhang, X.; Bai, T.; Yuan, X.; Bao, H.; He, D.; Sun, W.; He, Y. Cotton Verticillium wilt monitoring based on UAV multispectral-visible multi-source feature fusion. Comput. Electron. Agric. 2024 , 217 , 108628. [ Google Scholar ] [ CrossRef ]
  • De Cesaro Júnior, T.; Rieder, R.; Di Domênico, J.R.; Lau, D. InsectCV: A system for insect detection in the lab from trap images. Ecol. Inform. 2022 , 67 , 101516. [ Google Scholar ] [ CrossRef ]
  • Ishengoma, F.S.; Rai, I.A.; Ngoga, S.R. Hybrid convolution neural network model for a quicker detection of infested maize plants with fall armyworms using UAV-based images. Ecol. Inform. 2022 , 67 , 101502. [ Google Scholar ] [ CrossRef ]
  • Waheed, A.; Goyal, M.; Gupta, D.; Khanna, A.; Hassanien, A.E.; Pandey, H.M. An optimized dense convolutional neural network model for disease recognition and classification in corn leaf. Comput. Electron. Agric. 2020 , 175 , 105456. [ Google Scholar ] [ CrossRef ]
  • Sunil, C.K.; Jaidhar, C.D.; Patil, N. Tomato plant disease classification using Multilevel Feature Fusion with adaptive channel spatial and pixel attention mechanism. Expert Syst. Appl. 2023 , 228 , 120381. [ Google Scholar ] [ CrossRef ]
  • Dong, S.; Teng, Y.; Jiao, L.; Du, J.; Liu, K.; Wang, R. ESA-Net: An efficient scale-aware network for small crop pest detection. Expert Syst. Appl. 2024 , 236 , 121308. [ Google Scholar ] [ CrossRef ]
  • Amarathunga, D.C.; Ratnayake, M.N.; Grundy, J.; Dorin, A. Fine-grained image classification of microscopic insect pest species: Western Flower thrips and Plague thrips. Comput. Electron. Agric. 2022 , 203 , 107462. [ Google Scholar ] [ CrossRef ]
  • Ye, W.; Lao, J.; Liu, Y.; Chang, C.-C.; Zhang, Z.; Li, H.; Zhou, H. Pine pest detection using remote sensing satellite images combined with a multi-scale attention-UNet model. Ecol. Inform. 2022 , 72 , 101906. [ Google Scholar ] [ CrossRef ]
  • Kaliraj, S.; Adhikari, K.; Dharumarajan, S.; Lalitha, M.; Kumar, N. Chapter 3—Remote sensing and geographic information system applications in mapping and assessment of soil resources. In Remote Sensing of Soils ; Dharumarajan, S., Kaliraj, S., Adhikari, K., Lalitha, M., Kumar, N., Eds.; Elsevier: Amsterdam, The Netherlands, 2024; pp. 25–41. [ Google Scholar ]
  • Yang, H.; Zhang, X.; Xu, M.; Shao, S.; Wang, X.; Liu, W.; Wu, D.; Ma, Y.; Bao, Y.; Zhang, X.; et al. Hyper-temporal remote sensing data in bare soil period and terrain attributes for digital soil mapping in the Black soil regions of China. Catena 2020 , 184 , 104259. [ Google Scholar ] [ CrossRef ]
  • Das, B.; Rathore, P.; Roy, D.; Chakraborty, D.; Bhattacharya, B.K.; Mandal, D.; Jatav, R.; Sethi, D.; Mukherjee, J.; Sehgal, V.K.; et al. Ensemble surface soil moisture estimates at farm-scale combining satellite-based optical-thermal-microwave remote sensing observations. Agric. For. Meteorol. 2023 , 339 , 109567. [ Google Scholar ] [ CrossRef ]
  • Dash, P.K. Chapter 22—Remote sensing as a potential tool for advancing digital soil mapping. In Remote Sensing of Soils ; Dharumarajan, S., Kaliraj, S., Adhikari, K., Lalitha, M., Kumar, N., Eds.; Elsevier: Amsterdam, The Netherlands, 2024; pp. 357–370. [ Google Scholar ]
  • Das, S.; Ghimire, D. Chapter 25—Soil organic carbon: Measurement and monitoring using remote sensing data. In Remote Sensing of Soils ; Dharumarajan, S., Kaliraj, S., Adhikari, K., Lalitha, M., Kumar, N., Eds.; Elsevier: Amsterdam, The Netherlands, 2024; pp. 395–409. [ Google Scholar ]
  • Hareesh, S.B. Chapter 7—The latest applications of remote sensing technologies for soil management in precision agriculture practices. In Remote Sensing in Precision Agriculture ; Lamine, S., Srivastava, P.K., Kayad, A., Muñoz-Arriola, F., Pandey, P.C., Eds.; Academic Press: Cambridge, MA, USA, 2024; pp. 105–135. [ Google Scholar ]
  • Peña-Arancibia, J.L.; Mainuddin, M.; Kirby, J.M.; Chiew, F.H.S.; McVicar, T.R.; Vaze, J. Assessing irrigated agriculture’s surface water and groundwater consumption by combining satellite remote sensing and hydrologic modelling. Sci. Total Environ. 2016 , 542 , 372–382. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Li, Q.; Hao, H.; Zhao, Y.; Geng, Q.; Liu, G.; Zhang, Y.; Yu, F. GANs-LSTM Model for Soil Temperature Estimation From Meteorological: A New Approach. IEEE Access 2020 , 8 , 59427–59443. [ Google Scholar ] [ CrossRef ]
  • Li, Q.; Li, Z.; Shangguan, W.; Wang, X.; Li, L.; Yu, F. Improving soil moisture prediction using a novel encoder-decoder model with residual learning. Comput. Electron. Agric. 2022 , 195 , 106816. [ Google Scholar ] [ CrossRef ]
  • Mohanty, B.P.; Cosh, M.H.; Lakshmi, V.; Montzka, C. Soil Moisture Remote Sensing: State-of-the-Science. Vadose Zone J. 2017 , 16 , 1–9. [ Google Scholar ] [ CrossRef ]
  • Maynard, J.J.; Levi, M.R. Hyper-temporal remote sensing for digital soil mapping: Characterizing soil-vegetation response to climatic variability. Geoderma 2017 , 285 , 94–109. [ Google Scholar ] [ CrossRef ]
  • Duan, M.; Song, X.; Li, Z.; Zhang, X.; Ding, X.; Cui, D. Identifying soil groups and selecting a high-accuracy classification method based on multi-textural features with optimal window sizes using remote sensing images. Ecol. Inform. 2024 , 81 , 102563. [ Google Scholar ] [ CrossRef ]
  • Zhou, Q.B.; Yu, Q.Y.; Liu, J.; Wu, W.B.; Tang, H.J. Perspective of Chinese GF-1 high-resolution satellite data in agricultural remote sensing monitoring. J. Integr. Agric. 2017 , 16 , 242–251. [ Google Scholar ] [ CrossRef ]
  • Musasa, T.; Dube, T.; Marambanyika, T. Landsat satellite programme potential for soil erosion assessment and monitoring in arid environments: A review of applications and challenges. Int. Soil Water Conserv. Res. 2023 , 12 , 267–278. [ Google Scholar ] [ CrossRef ]
  • Wang, J.; Zhang, Y.; Song, P.; Tian, J. Estimating sub-daily resolution soil moisture using Fengyun satellite data and machine learning. J. Hydrol. 2024 , 632 , 130814. [ Google Scholar ] [ CrossRef ]
  • Kolassa, J.; Reichle, R.H.; Liu, Q.; Alemohammad, S.H.; Gentine, P.; Aida, K.; Asanuma, J.; Bircher, S.; Caldwell, T.; Colliander, A.; et al. Estimating surface soil moisture from SMAP observations using a Neural Network technique. Remote Sens. Environ. 2018 , 204 , 43–59. [ Google Scholar ] [ CrossRef ]
  • Wang, L.a.; Zhou, X.; Zhu, X.; Dong, Z.; Guo, W. Estimation of biomass in wheat using random forest regression algorithm and remote sensing data. Crop J. 2016 , 4 , 212–219. [ Google Scholar ] [ CrossRef ]
  • Yang, H.; Xiong, L.; Liu, D.; Cheng, L.; Chen, J. High spatial resolution simulation of profile soil moisture by assimilating multi-source remote-sensed information into a distributed hydrological model. J. Hydrol. 2021 , 597 , 126311. [ Google Scholar ] [ CrossRef ]
  • Mammadov, E.; Nowosad, J.; Glaesser, C. Estimation and mapping of surface soil properties in the Caucasus Mountains, Azerbaijan using high-resolution remote sensing data. Geoderma Reg. 2021 , 26 , e00411. [ Google Scholar ] [ CrossRef ]
  • Straffelini, E.; Pijl, A.; Otto, S.; Marchesini, E.; Pitacco, A.; Tarolli, P. A high-resolution physical modelling approach to assess runoff and soil erosion in vineyards under different soil managements. Soil Tillage Res. 2022 , 222 , 105418. [ Google Scholar ] [ CrossRef ]
  • Koley, S.; Jeganathan, C. Estimation and evaluation of high spatial resolution surface soil moisture using multi-sensor multi-resolution approach. Geoderma 2020 , 378 , 114618. [ Google Scholar ] [ CrossRef ]
  • Bertalan, L.; Holb, I.; Pataki, A.; Négyesi, G.; Szabó, G.; Kupásné Szalóki, A.; Szabo, S. UAV-based multispectral and thermal cameras to predict soil water content–A machine learning approach. Comput. Electron. Agric. 2022 , 200 , 107262. [ Google Scholar ] [ CrossRef ]
  • Menzies Pluer, E.G.; Robinson, D.T.; Meinen, B.U.; Macrae, M.L. Pairing soil sampling with very-high resolution UAV imagery: An examination of drivers of soil and nutrient movement and agricultural productivity in southern Ontario. Geoderma 2020 , 379 , 114630. [ Google Scholar ] [ CrossRef ]
  • Cheng, M.; Jiao, X.; Liu, Y.; Shao, M.; Yu, X.; Bai, Y.; Wang, Z.; Wang, S.; Tuohuti, N.; Liu, S.; et al. Estimation of soil moisture content under high maize canopy coverage from UAV multimodal data and machine learning. Agric. Water Manag. 2022 , 264 , 107530. [ Google Scholar ] [ CrossRef ]
  • Huuskonen, J.; Oksanen, T. Soil sampling with drones and augmented reality in precision agriculture. Comput. Electron. Agric. 2018 , 154 , 25–35. [ Google Scholar ] [ CrossRef ]
  • Shokati, H.; Mashal, M.; Noroozi, A.; Mirzaei, S.; Mohammadi-Doqozloo, Z. Assessing soil moisture levels using visible UAV imagery and machine learning models. Remote Sens. Appl. Soc. Environ. 2023 , 32 , 101076. [ Google Scholar ] [ CrossRef ]
  • Wang, Z.; Zhang, X.; Zhang, F.; Chan, N.W.; Kung, H.-t.; Liu, S.; Deng, L. Estimation of soil salt content using machine learning techniques based on remote-sensing fractional derivatives, a case study in the Ebinur Lake Wetland National Nature Reserve, Northwest China. Ecol. Indic. 2020 , 119 , 106869. [ Google Scholar ] [ CrossRef ]
  • Ma, S.; He, B.; Ge, X.; Luo, X. Spatial prediction of soil salinity based on the Google Earth Engine platform with multitemporal synthetic remote sensing images. Ecol. Inform. 2023 , 75 , 102111. [ Google Scholar ] [ CrossRef ]
  • Du, R.; Chen, J.; Xiang, Y.; Xiang, R.; Yang, X.; Wang, T.; He, Y.; Wu, Y.; Yin, H.; Zhang, Z.; et al. Timely monitoring of soil water-salt dynamics within cropland by hybrid spectral unmixing and machine learning models. Int. Soil Water Conserv. Res. 2023 , 12 , 726–740. [ Google Scholar ] [ CrossRef ]
  • Golestani, M.; Mosleh Ghahfarokhi, Z.; Esfandiarpour-Boroujeni, I.; Shirani, H. Evaluating the spatiotemporal variations of soil salinity in Sirjan Playa, Iran using Sentinel-2A and Landsat-8 OLI imagery. Catena 2023 , 231 , 107375. [ Google Scholar ] [ CrossRef ]
  • Sothe, C.; Gonsamo, A.; Arabian, J.; Snider, J. Large scale mapping of soil organic carbon concentration with 3D machine learning and satellite observations. Geoderma 2022 , 405 , 115402. [ Google Scholar ] [ CrossRef ]
  • Rahman, A.; Abdullah, H.M.; Tanzir, M.T.; Hossain, M.J.; Khan, B.M.; Miah, M.G.; Islam, I. Performance of different machine learning algorithms on satellite image classification in rural and urban setup. Remote Sens. Appl. Soc. Environ. 2020 , 20 , 100410. [ Google Scholar ] [ CrossRef ]
  • Huang, H.; Wang, J.; Liu, C.; Liang, L.; Li, C.; Gong, P. The migration of training samples towards dynamic global land cover mapping. ISPRS J. Photogramm. Remote Sens. 2020 , 161 , 27–36. [ Google Scholar ] [ CrossRef ]
  • Zafar, Z.; Zubair, M.; Zha, Y.; Fahd, S.; Ahmad Nadeem, A. Performance assessment of machine learning algorithms for mapping of land use/land cover using remote sensing data. Egypt. J. Remote Sens. Space Sci. 2024 , 27 , 216–226. [ Google Scholar ] [ CrossRef ]
  • Elhadi, M.I.A.; Mutanga, O.; Odindi, J.; Abdel-Rahman, E.M. Land-use/cover classification in a heterogeneous coastal landscape using RapidEye imagery: Evaluating the performance of random forest and support vector machines classifiers. Int. J. Remote Sens. 2014 , 35 , 3440–3458. [ Google Scholar ]
  • Gómez, C.; White, J.C.; Wulder, M.A. Optical remotely sensed time series data for land cover classification: A review. ISPRS J. Photogramm. Remote Sens. 2016 , 116 , 55–72. [ Google Scholar ] [ CrossRef ]
  • Matlhodi, B.; Kenabatho, P.K.; Parida, B.P.; Maphanyane, J.G. Evaluating Land Use and Land Cover Change in the Gaborone Dam Catchment, Botswana, from 1984–2015 Using GIS and Remote Sensing. Sustainability 2019 , 11 , 5174. [ Google Scholar ] [ CrossRef ]
  • Liu, J.; Yang, K.; Tariq, A.; Lu, L.; Soufan, W.; El Sabagh, A. Interaction of climate, topography and soil properties with cropland and cropping pattern using remote sensing data and machine learning methods. Egypt. J. Remote Sens. Space Sci. 2023 , 26 , 415–426. [ Google Scholar ] [ CrossRef ]
  • Yuh, Y.G.; Tracz, W.; Matthews, H.D.; Turner, S.E. Application of machine learning approaches for land cover monitoring in northern Cameroon. Ecol. Inform. 2023 , 74 , 101955. [ Google Scholar ] [ CrossRef ]
  • Khatami, R.; Mountrakis, G.; Stehman, S.V. A meta-analysis of remote sensing research on supervised pixel-based land-cover image classification processes: General guidelines for practitioners and future research. Remote Sens. Environ. 2016 , 177 , 89–100. [ Google Scholar ] [ CrossRef ]
  • Nitze, I.; Barrett, B.; Cawkwell, F. Temporal optimisation of image acquisition for land cover classification with Random Forest and MODIS time-series. Int. J. Appl. Earth Obs. Geoinf. 2015 , 34 , 136–146. [ Google Scholar ] [ CrossRef ]
  • Zhang, S.; Liu, L.Y. The potential of the MERIS Terrestrial Chlorophyll Index for crop yield prediction. Remote Sens. Lett. 2014 , 5 , 733–742. [ Google Scholar ] [ CrossRef ]
  • Teodoro, A. Applicability of data mining algorithms in the identification of beach features/patterns on high-resolution satellite data. J. Appl. Remote Sens. 2015 , 9 , 095095. [ Google Scholar ] [ CrossRef ]
  • Sinha, S.; Sharma, L.K.; Nathawat, M.S. Improved Land-use/Land-cover classification of semi-arid deciduous forest landscape using thermal remote sensing. Egypt. J. Remote Sens. Space Sci. 2015 , 18 , 217–233. [ Google Scholar ] [ CrossRef ]
  • Mei, A.; Manzo, C.; Fontinovo, G.; Bassani, C.; Allegrini, A.; Petracchini, F. Assessment of land cover changes in Lampedusa Island (Italy) using Landsat TM and OLI data. J. Afr. Earth Sci. 2016 , 122 , 15–24. [ Google Scholar ] [ CrossRef ]
  • Silva, L.P.E.; Xavier, A.P.C.; da Silva, R.M.; Santos, C.A.G. Modeling land cover change based on an artificial neural network for a semiarid river basin in northeastern Brazil. Glob. Ecol. Conserv. 2020 , 21 , e00811. [ Google Scholar ] [ CrossRef ]
  • Zhang, H.K.; Roy, D.P.; Luo, D. Demonstration of large area land cover classification with a one dimensional convolutional neural network applied to single pixel temporal metric percentiles. Remote Sens. Environ. 2023 , 295 , 113653. [ Google Scholar ] [ CrossRef ]
  • Zhang, C.; Yue, P.; Tapete, D.; Shangguan, B.; Wang, M.; Wu, Z. A multi-level context-guided classification method with object-based convolutional neural network for land cover classification using very high resolution remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2020 , 88 , 102086. [ Google Scholar ] [ CrossRef ]
  • Loukika, K.N.; Keesara, V.R.; Sridhar, V. Analysis of Land Use and Land Cover Using Machine Learning Algorithms on Google Earth Engine for Munneru River Basin, India. Sustainability 2021 , 13 , 13758. [ Google Scholar ] [ CrossRef ]
  • Prasad, P.; Loveson, V.J.; Chandra, P.; Kotha, M. Evaluation and comparison of the earth observing sensors in land cover/land use studies using machine learning algorithms. Ecol. Inform. 2022 , 68 , 101522. [ Google Scholar ] [ CrossRef ]
  • Zhou, X.; Zheng, H.B.; Xu, X.Q.; He, J.Y.; Ge, X.K.; Yao, X.; Cheng, T.; Zhu, Y.; Cao, W.X.; Tian, Y.C. Predicting grain yield in rice using multi-temporal vegetation indices from UAV-based multispectral and digital imagery. ISPRS J. Photogramm. Remote Sens. 2017 , 130 , 246–255. [ Google Scholar ] [ CrossRef ]
  • Wang, L.; Tian, Y.; Yao, X.; Zhu, Y.; Cao, W. Predicting grain yield and protein content in wheat by fusing multi-sensor and multi-temporal remote-sensing images. Field Crops Res. 2014 , 164 , 178–188. [ Google Scholar ] [ CrossRef ]
  • Furukawa, F.; Maruyama, K.; Saito, Y.K.; Kaneko, M. Corn Height Estimation Using UAV for Yield Prediction and Crop Monitoring. In Unmanned Aerial Vehicle: Applications in Agriculture and Environment ; Avtar, R., Watanabe, T., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 51–69. [ Google Scholar ]
  • Johnson, D.M. An assessment of pre- and within-season remotely sensed variables for forecasting corn and soybean yields in the United States. Remote Sens. Environ. 2014 , 141 , 116–128. [ Google Scholar ] [ CrossRef ]
  • Shao, M.; Nie, C.; Zhang, A.; Shi, L.; Zha, Y.; Xu, H.; Yang, H.; Yu, X.; Bai, Y.; Liu, S.; et al. Quantifying effect of maize tassels on LAI estimation based on multispectral imagery and machine learning methods. Comput. Electron. Agric. 2023 , 211 , 108029. [ Google Scholar ] [ CrossRef ]
  • Yang, C.; Lee, W.S.; Gader, P. Hyperspectral band selection for detecting different blueberry fruit maturity stages. Comput. Electron. Agric. 2014 , 109 , 23–31. [ Google Scholar ] [ CrossRef ]
  • Peña, M.A.; Brenning, A. Assessing fruit-tree crop classification from Landsat-8 time series for the Maipo Valley, Chile. Remote Sens. Environ. 2015 , 171 , 234–244. [ Google Scholar ] [ CrossRef ]
  • Liang, L.; Di, L.; Zhang, L.; Deng, M.; Qin, Z.; Zhao, S.; Lin, H. Estimation of crop LAI using hyperspectral vegetation indices and a hybrid inversion method. Remote Sens. Environ. 2015 , 165 , 123–134. [ Google Scholar ] [ CrossRef ]
  • Yang, Z.; Shao, Y.; Li, K.; Liu, Q.; Liu, L.; Brisco, B. An improved scheme for rice phenology estimation based on time-series multispectral HJ-1A/B and polarimetric RADARSAT-2 data. Remote Sens. Environ. 2017 , 195 , 184–201. [ Google Scholar ] [ CrossRef ]
  • Azadbakht, M.; Ashourloo, D.; Aghighi, H.; Homayouni, S.; Shahrabi, H.S.; Matkan, A.; Radiom, S. Alfalfa yield estimation based on time series of Landsat 8 and PROBA-V images: An investigation of machine learning techniques and spectral-temporal features. Remote Sens. Appl. Soc. Environ. 2022 , 25 , 100657. [ Google Scholar ] [ CrossRef ]
  • Görgens, E.B.; Montaghi, A.; Rodriguez, L.C.E. A performance comparison of machine learning methods to estimate the fast-growing forest plantation yield based on laser scanning metrics. Comput. Electron. Agric. 2015 , 116 , 221–227. [ Google Scholar ] [ CrossRef ]
  • Guo, Z.; Chamberlin, J.; You, L. Smallholder maize yield estimation using satellite data and machine learning in Ethiopia. Crop Environ. 2023 , 2 , 165–174. [ Google Scholar ] [ CrossRef ]
  • Van Ewijk, K.Y.; Randin, C.F.; Treitz, P.M.; Scott, N.A. Predicting fine-scale tree species abundance patterns using biotic variables derived from LiDAR and high spatial resolution imagery. Remote Sens. Environ. 2014 , 150 , 120–131. [ Google Scholar ] [ CrossRef ]
  • Khanal, S.; Klopfenstein, A.; Kc, K.; Ramarao, V.; Fulton, J.; Douridas, N.; Shearer, S.A. Assessing the impact of agricultural field traffic on corn grain yield using remote sensing and machine learning. Soil Tillage Res. 2021 , 208 , 104880. [ Google Scholar ] [ CrossRef ]
  • Habibi, L.N.; Matsui, T.; Tanaka, T.S.T. Critical evaluation of the effects of a cross-validation strategy and machine learning optimization on the prediction accuracy and transferability of a soybean yield prediction model using UAV-based remote sensing. J. Agric. Food Res. 2024 , 16 , 101096. [ Google Scholar ] [ CrossRef ]
  • Zhang, S.; Qi, X.; Gao, M.; Dai, C.; Yin, G.; Ma, D.; Feng, W.; Guo, T.; He, L. Estimation of wheat protein content and wet gluten content based on fusion of hyperspectral and RGB sensors using machine learning algorithms. Food Chem. 2024 , 448 , 139103. [ Google Scholar ] [ CrossRef ]
  • Guo, Y.; Xiao, Y.; Hao, F.; Zhang, X.; Chen, J.; de Beurs, K.; He, Y.; Fu, Y.H. Comparison of different machine learning algorithms for predicting maize grain yield using UAV-based hyperspectral images. Int. J. Appl. Earth Obs. Geoinf. 2023 , 124 , 103528. [ Google Scholar ] [ CrossRef ]
  • Qu, H.; Zheng, C.; Ji, H.; Barai, K.; Zhang, Y.-J. A fast and efficient approach to estimate wild blueberry yield using machine learning with drone photography: Flight altitude, sampling method and model effects. Comput. Electron. Agric. 2024 , 216 , 108543. [ Google Scholar ] [ CrossRef ]
  • Yu, N.; Li, L.; Schmitz, N.; Tian, L.F.; Greenberg, J.A.; Diers, B.W. Development of methods to improve soybean yield estimation and predict plant maturity with an unmanned aerial vehicle based platform. Remote Sens. Environ. 2016 , 187 , 91–101. [ Google Scholar ] [ CrossRef ]
  • Maimaitijiang, M.; Ghulam, A.; Sidike, P.; Hartling, S.; Maimaitiyiming, M.; Peterson, K.; Shavers, E.; Fishman, J.; Peterson, J.; Kadam, S.; et al. Unmanned Aerial System (UAS)-based phenotyping of soybean using multi-sensor data fusion and extreme learning machine. ISPRS J. Photogramm. Remote Sens. 2017 , 134 , 43–58. [ Google Scholar ] [ CrossRef ]
  • Xu, W.; Chen, P.; Zhan, Y.; Chen, S.; Zhang, L.; Lan, Y. Cotton yield estimation model based on machine learning using time series UAV remote sensing data. Int. J. Appl. Earth Obs. Geoinf. 2021 , 104 , 102511. [ Google Scholar ] [ CrossRef ]
  • Liu, S.; Jin, X.; Bai, Y.; Wu, W.; Cui, N.; Cheng, M.; Liu, Y.; Meng, L.; Jia, X.; Nie, C.; et al. UAV multispectral images for accurate estimation of the maize LAI considering the effect of soil background. Int. J. Appl. Earth Obs. Geoinf. 2023 , 121 , 103383. [ Google Scholar ] [ CrossRef ]
  • Kern, A.; Barcza, Z.; Marjanović, H.; Árendás, T.; Fodor, N.; Bónis, P.; Bognár, P.; Lichtenberger, J. Statistical modelling of crop yield in Central Europe using climate data and remote sensing vegetation indices. Agric. For. Meteorol. 2018 , 260 , 300–320. [ Google Scholar ] [ CrossRef ]
  • Bai, H.; Xiao, D.; Tang, J.; Liu, D.L. Evaluation of wheat yield in North China Plain under extreme climate by coupling crop model with machine learning. Comput. Electron. Agric. 2024 , 217 , 108651. [ Google Scholar ] [ CrossRef ]
  • Khanal, S.; Fulton, J.; Klopfenstein, A.; Douridas, N.; Shearer, S. Integration of high resolution remotely sensed data and machine learning techniques for spatial prediction of soil properties and corn yield. Comput. Electron. Agric. 2018 , 153 , 213–225. [ Google Scholar ] [ CrossRef ]
  • Singh, J.; Singh, G.; Gupta, N. Balancing phosphorus fertilization for sustainable maize yield and soil test phosphorus management: A long-term study using machine learning. Field Crops Res. 2023 , 304 , 109169. [ Google Scholar ] [ CrossRef ]
  • Fry, J.; Guber, A.K.; Ladoni, M.; Munoz, J.D.; Kravchenko, A.N. The effect of up-scaling soil properties and model parameters on predictive accuracy of DSSAT crop simulation model under variable weather conditions. Geoderma 2017 , 287 , 105–115. [ Google Scholar ] [ CrossRef ]
  • Zain, M.; Si, Z.; Li, S.; Gao, Y.; Mehmood, F.; Rahman, S.-U.; Mounkaila Hamani, A.K.; Duan, A. The Coupled Effects of Irrigation Scheduling and Nitrogen Fertilization Mode on Growth, Yield and Water Use Efficiency in Drip-Irrigated Winter Wheat. Sustainability 2021 , 13 , 2742. [ Google Scholar ] [ CrossRef ]
  • Wang, Y.; Shi, W.; Wen, T. Prediction of winter wheat yield and dry matter in North China Plain using machine learning algorithms for optimal water and nitrogen application. Agric. Water Manag. 2023 , 277 , 108140. [ Google Scholar ] [ CrossRef ]
  • Kaur Dhaliwal, J.; Panday, D.; Saha, D.; Lee, J.; Jagadamma, S.; Schaeffer, S.; Mengistu, A. Predicting and interpreting cotton yield and its determinants under long-term conservation management practices using machine learning. Comput. Electron. Agric. 2022 , 199 , 107107. [ Google Scholar ] [ CrossRef ]
  • Elavarasan, D.; Vincent, D.R.; Sharma, V.; Zomaya, A.Y.; Srinivasan, K. Forecasting yield by integrating agrarian factors and machine learning models: A survey. Comput. Electron. Agric. 2018 , 155 , 257–282. [ Google Scholar ] [ CrossRef ]
  • Singh, B.; Jana, A.K. Forecast of agri-residues generation from rice, wheat and oilseed crops in India using machine learning techniques: Exploring strategies for sustainable smart management. Environ. Res. 2024 , 245 , 117993. [ Google Scholar ] [ CrossRef ]
  • Zhou, H.K.; Yang, J.H.; Lou, W.D.; Sheng, L.; Li, D.; Hu, H. Improving grain yield prediction through fusion of multi-temporal spectral features and agronomic trait parameters derived from UAV imagery. Front. Plant Sci. 2023 , 14 , 1217448. [ Google Scholar ] [ CrossRef ]
  • Habyarimana, E.; Piccard, I.; Catellani, M.; De Franceschi, P.; Dall’Agata, M. Towards Predictive Modeling of Sorghum Biomass Yields Using Fraction of Absorbed Photosynthetically Active Radiation Derived from Sentinel-2 Satellite Imagery and Supervised Machine Learning Techniques. Agronomy 2019 , 9 , 203. [ Google Scholar ] [ CrossRef ]
  • Kowalik, W.; Dabrowska-Zielinska, K.; Meroni, M.; Raczka, T.U.; de Wit, A. Yield estimation using SPOT-VEGETATION products: A case study of wheat in European countries. Int. J. Appl. Earth Obs. Geoinf. 2014 , 32 , 228–239. [ Google Scholar ] [ CrossRef ]
  • Castaldi, F.; Casa, R.; Pelosi, F.; Yang, H. Influence of acquisition time and resolution on wheat yield estimation at the field scale from canopy biophysical variables retrieved from SPOT satellite data. Int. J. Remote Sens. 2015 , 36 , 2438–2459. [ Google Scholar ] [ CrossRef ]
  • Naghdyzadegan Jahromi, M.; Zand-Parsa, S.; Razzaghi, F.; Jamshidi, S.; Didari, S.; Doosthosseini, A.; Pourghasemi, H.R. Developing machine learning models for wheat yield prediction using ground-based data, satellite-based actual evapotranspiration and vegetation indices. Eur. J. Agron. 2023 , 146 , 126820. [ Google Scholar ] [ CrossRef ]
  • Jurečka, F.; Fischer, M.; Hlavinka, P.; Balek, J.; Semerádová, D.; Bláhová, M.; Anderson, M.C.; Hain, C.; Žalud, Z.; Trnka, M. Potential of water balance and remote sensing-based evapotranspiration models to predict yields of spring barley and winter wheat in the Czech Republic. Agric. Water Manag. 2021 , 256 , 107064. [ Google Scholar ] [ CrossRef ]
  • Yang, C.; Lei, H. Evaluation of data assimilation strategies on improving the performance of crop modeling based on a novel evapotranspiration assimilation framework. Agric. For. Meteorol. 2024 , 346 , 109882. [ Google Scholar ] [ CrossRef ]
  • Gilardelli, C.; Stella, T.; Confalonieri, R.; Ranghetti, L.; Campos-Taberner, M.; García-Haro, F.J.; Boschetti, M. Downscaling rice yield simulation at sub-field scale using remotely sensed LAI data. Eur. J. Agron. 2019 , 103 , 108–116. [ Google Scholar ] [ CrossRef ]
  • Gaso, D.V.; de Wit, A.; Berger, A.G.; Kooistra, L. Predicting within-field soybean yield variability by coupling Sentinel-2 leaf area index with a crop growth model. Agric. For. Meteorol. 2021 , 308 , 108553. [ Google Scholar ] [ CrossRef ]
  • Liu, C.; Liu, Y.; Lu, Y.H.; Liao, Y.L.; Nie, J.; Yuan, X.L.; Chen, F. Use of a leaf chlorophyll content index to improve the prediction of above-ground biomass and productivity. PeerJ 2019 , 6 . [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Singh, V.; Kunal; Singh, M.; Singh, B. Spectral indices measured with proximal sensing using canopy reflectance sensor, chlorophyll meter and leaf color chart for in-season grain yield prediction of basmati rice. Pedosphere 2022 , 32 , 812–822. [ Google Scholar ] [ CrossRef ]
  • Zhang, J.; Feng, L.; Yao, F. Improved maize cultivated area estimation over a large scale combining MODIS–EVI time series data and crop phenological information. ISPRS J. Photogramm. Remote Sens. 2014 , 94 , 102–113. [ Google Scholar ] [ CrossRef ]
  • De la Casa, A.; Ovando, G.; Bressanini, L.; Martínez, J.; Díaz, G.; Miranda, C. Soybean crop coverage estimation from NDVI images with different spatial resolution to evaluate yield variability in a plot. ISPRS J. Photogramm. Remote Sens. 2018, 146, 531–547.
  • Kitano, B.T.; Mendes, C.C.T.; Geus, A.R.; Oliveira, H.C.; Souza, J.R. Corn Plant Counting Using Deep Learning and UAV Images. IEEE Geosci. Remote Sens. Lett. 2019, 1–5.
  • Jhajharia, K.; Mathur, P. Prediction of crop yield using satellite vegetation indices combined with machine learning approaches. Adv. Space Res. 2023, 72, 3998–4007.
  • Shammi, S.A.; Meng, Q. Use time series NDVI and EVI to develop dynamic crop growth metrics for yield modeling. Ecol. Indic. 2021, 121, 107124.
  • Zhao, Y.; Vergopolan, N.; Baylis, K.; Blekking, J.; Caylor, K.; Evans, T.; Giroux, S.; Sheffield, J.; Estes, L. Comparing empirical and survey-based yield forecasts in a dryland agro-ecosystem. Agric. For. Meteorol. 2018, 262, 147–156.
  • Zhang, H.; Wang, L.; Tian, T.; Yin, J. A Review of Unmanned Aerial Vehicle Low-Altitude Remote Sensing (UAV-LARS) Use in Agricultural Monitoring in China. Remote Sens. 2021, 13, 1221.
  • Zhang, Y.X.; Walker, J.P.; Pauwels, V.R.N.; Sadeh, Y. Assimilation of Wheat and Soil States into the APSIM-Wheat Crop Model: A Case Study. Remote Sens. 2022, 14, 65.
  • Kheir, A.M.S.; Mkuhlani, S.; Mugo, J.W.; Elnashar, A.; Nangia, V.; Devare, M.; Govind, A. Integrating APSIM model with machine learning to predict wheat yield spatial distribution. Agron. J. 2023, 115, 3188–3196.
  • Bai, T.; Zhang, N.; Mercatoris, B.; Chen, Y. Improving Jujube Fruit Tree Yield Estimation at the Field Scale by Assimilating a Single Landsat Remotely-Sensed LAI into the WOFOST Model. Remote Sens. 2019, 11, 1119.
  • Tie-cheng, B.; Wang, T.; Zhang, N.N.; Chen, Y.Q.; Mercatoris, B. Growth simulation and yield prediction for perennial jujube fruit tree by integrating age into the WOFOST model. J. Integr. Agric. 2020, 19, 721–734.
  • Shi, Y.; Wang, Z.; Hou, C.; Zhang, P. Yield estimation of Lycium barbarum L. based on the WOFOST model. Ecol. Model. 2022, 473, 110146.
  • Bellakanji, A.C.; Zribi, M.; Lili-Chabaane, Z.; Mougenot, B. Forecasting of Cereal Yields in a Semi-arid Area Using the Simple Algorithm for Yield Estimation (SAFY) Agro-Meteorological Model Combined with Optical SPOT/HRV Images. Sensors 2018, 18, 2138.
  • Huang, J.; Sedano, F.; Huang, Y.; Ma, H.; Li, X.; Liang, S.; Tian, L.; Zhang, X.; Fan, J.; Wu, W. Assimilating a synthetic Kalman filter leaf area index series into the WOFOST model to improve regional winter wheat yield estimation. Agric. For. Meteorol. 2016, 216, 188–202.
  • Fattori Junior, I.M.; dos Santos Vianna, M.; Marin, F.R. Assimilating leaf area index data into a sugarcane process-based crop model for improving yield estimation. Eur. J. Agron. 2022, 136, 126501.
  • Hu, S.; Shi, L.; Huang, K.; Zha, Y.; Hu, X.; Ye, H.; Yang, Q. Improvement of sugarcane crop simulation by SWAP-WOFOST model via data assimilation. Field Crops Res. 2019, 232, 49–61.
  • Tang, Y.; Zhou, R.; He, P.; Yu, M.; Zheng, H.; Yao, X.; Cheng, T.; Zhu, Y.; Cao, W.; Tian, Y. Estimating wheat grain yield by assimilating phenology and LAI with the WheatGrow model based on theoretical uncertainty of remotely sensed observation. Agric. For. Meteorol. 2023, 339, 109574.
  • Li, Z.; Ding, L.; Shen, B.; Chen, J.; Xu, D.; Wang, X.; Fang, W.; Pulatov, A.; Kussainova, M.; Amarjargal, A.; et al. Quantifying key vegetation parameters from Sentinel-3 and MODIS over the eastern Eurasian steppe with a Bayesian geostatistical model. Sci. Total Environ. 2024, 909, 168594.
  • Xue, H.; Xu, X.; Zhu, Q.; Meng, Y.; Long, H.; Li, H.; Song, X.; Yang, G.; Yang, M.; Li, Y.; et al. Rice yield and quality estimation coupling hierarchical linear model with remote sensing. Comput. Electron. Agric. 2024, 218, 108731.
  • Pandey, D.K.; Mishra, R. Towards sustainable agriculture: Harnessing AI for global food security. Artif. Intell. Agric. 2024, 12, 72–84.
  • Liu, Q.; Wang, C.; Jiang, J.; Wu, J.; Wang, X.; Cao, Q.; Tian, Y.; Zhu, Y.; Cao, W.; Liu, X. Multi-source data fusion improved the potential of proximal fluorescence sensors in predicting nitrogen nutrition status across winter wheat growth stages. Comput. Electron. Agric. 2024, 219, 108786.
  • Zhao, M.; Meng, Q.; Wang, L.; Zhang, L.; Hu, X.; Shi, W. Towards robust classification of multi-view remote sensing images with partial data availability. Remote Sens. Environ. 2024, 306, 114112.
  • Baltodano, A.; Agramont, A.; Lekarkar, K.; Spyrakos, E.; Reusen, I.; van Griensven, A. Exploring global remote sensing products for water quality assessment: Lake Nicaragua case study. Remote Sens. Appl. Soc. Environ. 2024, 36, 101331.
  • Zhang, H.K.; Qiu, S.; Suh, J.W.; Luo, D.; Zhu, Z. Machine Learning and Deep Learning in Remote Sensing Data Analysis. In Reference Module in Earth Systems and Environmental Sciences; Elsevier: Amsterdam, The Netherlands, 2024.
  • Feng, H.; Li, Q.; Wang, W.; Bashir, A.K.; Singh, A.K.; Xu, J.; Fang, K. Security of target recognition for UAV forestry remote sensing based on multi-source data fusion transformer framework. Inf. Fusion 2024, 112, 102555.
  • Joshi, P.; Sandhu, K.S.; Singh Dhillon, G.; Chen, J.; Bohara, K. Detection and monitoring wheat diseases using unmanned aerial vehicles (UAVs). Comput. Electron. Agric. 2024, 224, 109158.
  • Wu, Z.; Luo, J.; Rao, K.; Lin, H.; Song, X. Estimation of wheat kernel moisture content based on hyperspectral reflectance and satellite multispectral imagery. Int. J. Appl. Earth Obs. Geoinf. 2024, 126, 103597.
  • Qin, P.; Huang, H.; Tang, H.; Wang, J.; Liu, C. MUSTFN: A spatiotemporal fusion method for multi-scale and multi-sensor remote sensing images based on a convolutional neural network. Int. J. Appl. Earth Obs. Geoinf. 2022, 115, 103113.
  • Marin, D.B.; Ferraz, G.A.e.S.; Santana, L.S.; Barbosa, B.D.S.; Barata, R.A.P.; Osco, L.P.; Ramos, A.P.M.; Guimarães, P.H.S. Detecting coffee leaf rust with UAV-based vegetation indices and decision tree machine learning models. Comput. Electron. Agric. 2021, 190, 106476.
  • López-Pérez, E.; Sanchis-Ibor, C.; Jiménez-Bello, M.Á.; Pulido-Velazquez, M. Mapping of irrigated vineyard areas through the use of machine learning techniques and remote sensing. Agric. Water Manag. 2024, 302, 108988.
  • Hao, S.; Ryu, D.; Western, A.W.; Perry, E.; Bogena, H.; Franssen, H.J.H. Global sensitivity analysis of APSIM-wheat yield predictions to model parameters and inputs. Ecol. Model. 2024, 487, 110551.
  • Fawakherji, M.; Suriani, V.; Nardi, D.; Bloisi, D.D. Shape and style GAN-based multispectral data augmentation for crop/weed segmentation in precision farming. Crop Prot. 2024, 184, 106848.
  • Dos Santos, E.P.; Moreira, M.C.; Fernandes-Filho, E.I.; Demattê, J.A.M.; Santos, U.J.d.; da Silva, D.D.; Cruz, R.R.P.; Moura-Bueno, J.M.; Santos, I.C.; Sampaio, E.V.d.S.B. Improving the generalization error and transparency of regression models to estimate soil organic carbon using soil reflectance data. Ecol. Inform. 2023, 77, 102240.
  • Goodridge, W.; Bernard, M.; Jordan, R.; Rampersad, R. Intelligent diagnosis of diseases in plants using a hybrid Multi-Criteria decision making technique. Comput. Electron. Agric. 2017, 133, 80–87.
  • Kumar, V.; Sharma, K.V.; Kedam, N.; Patel, A.; Kate, T.R.; Rathnayake, U. A comprehensive review on smart and sustainable agriculture using IoT technologies. Smart Agric. Technol. 2024, 8, 100487.
  • Zhou, J.; Gu, X.; Gong, H.; Yang, X.; Sun, Q.; Guo, L.; Pan, Y. Intelligent classification of maize straw types from UAV remote sensing images using DenseNet201 deep transfer learning algorithm. Ecol. Indic. 2024, 166, 112331.
  • Prasanna Lakshmi, G.S.; Asha, P.N.; Sandhya, G.; Vivek Sharma, S.; Shilpashree, S.; Subramanya, S.G. An intelligent IOT sensor coupled precision irrigation model for agriculture. Meas. Sens. 2023, 25, 100608.
  • Bissadu, K.D.; Sonko, S.; Hossain, G. Society 5.0 enabled agriculture: Drivers, enabling technologies, architectures, opportunities, and challenges. Inf. Process. Agric. 2024.
  • Et-taibi, B.; Abid, M.R.; Boufounas, E.-M.; Morchid, A.; Bourhnane, S.; Abu Hamed, T.; Benhaddou, D. Enhancing water management in smart agriculture: A cloud and IoT-based smart irrigation system. Results Eng. 2024, 22, 102283.
  • Rostami, K.; Salehi, L. Rural cooperatives social responsibility in promoting sustainability-oriented activities in the agricultural sector: Nexus of community, enterprise, and government. Sustain. Futures 2024, 7, 100150.
  • Pingali, P.; Plavšić, M. Hunger and environmental goals for Asia: Synergies and trade-offs among the SDGs. Environ. Chall. 2022, 7, 100491.


| Category | Model Name | Application in Precision Agriculture | Reference |
|---|---|---|---|
| Supervised Learning | Naive Bayes | Classification of different crop diseases, soil types, etc.; prediction of the yield of wheat, corn, and other crops. | [ , ] |
| | Logistic Regression | Assessment of the risk level of pest occurrence; prediction of the yield of wheat, corn, and other crops. | [ , ] |
| | Linear Regression | Optimization of fertilizer application rates to improve the accuracy of yield prediction for wheat, corn, and other crops. | [ , ] |
| | Lasso Regression | Detection of the extent to which crops are attacked by diseases and insect pests. | [ , ] |
| | AdaBoost Algorithm | Classification and identification of different crop species; detection of crop diseases and insect pests. | [ , ] |
| | Linear Discriminant Analysis | Classification of soil types; identification of crop varieties; determination of the effects of different soil fertilities on crop growth. | [ , ] |
| | Recurrent Neural Network | Analysis of crop growth time-series data; prediction of time-series changes in crop diseases and insect pests. | [ , ] |
| | Decision Tree | Selection of pest management strategies; identification of crop pest types. | [ , ] |
| | Nearest Neighbor Algorithm | Identification of different crop varieties; evaluation of soil fertility grades. | [ , ] |
| | XGBoost Algorithm | Prediction of the yield of wheat, corn, and other crops based on climate, soil conditions, and other variables. | [ , ] |
| | Long Short-Term Memory Network | Forecasting long-term trends in crop yield from climate variables such as precipitation and temperature; time-series prediction of outbreaks of crop diseases and insect pests. | [ , ] |
| | Support Vector Regression | Crop growth monitoring and modeling; use of remote sensing reflectance data to predict crop leaf area index, yield, etc. | [ , ] |
| | Artificial Neural Network | Identification of crop diseases and insect pests; crop growth monitoring and modeling; prediction of crop leaf area index, yield, etc. | [ , ] |
| | Convolutional Neural Network | Identification of crop leaf diseases and detection of the degree of disease invasion of crop leaves; prediction of crop leaf area index, yield, etc. | [ , ] |
| | Random Forest | Identification of crop diseases and insect pests; crop growth monitoring and modeling; prediction of crop leaf area index, yield, etc. | [ , ] |
| | Support Vector Machine | Identification of crop diseases and insect pests; crop growth monitoring and modeling; prediction of crop leaf area index, yield, etc. | [ , ] |
| | CatBoost Algorithm | Identification of crop leaf diseases and detection of the degree of disease invasion of crop leaves. | [ , ] |
| | Ridge Regression | Prediction of soil nutrients and key nutrient content based on soil sample data. | [ , ] |
| | Stochastic Gradient Descent | Optimization of model parameters to improve the accuracy of agricultural prediction and decision-making models; application to complex agricultural system modeling and prediction. | [ , ] |
| Semi-supervised Learning | Generative Semi-Supervised Learning | Assessment of soil quality; prediction of soil fertility, acidity, alkalinity, etc.; prediction and control of diseases and insect pests. | [ , ] |
| | Autoencoders | Identification and classification of diseases and insect pests; assessment of the risk level of pest occurrence. | [ ] |
| Unsupervised Learning | Co-Training | Identification, classification, and risk assessment of diseases and insect pests; soil type classification. | [ ] |
| | Probabilistic Graphical Model | Identification of crop diseases and insect pests; crop growth monitoring and modeling; prediction of crop leaf area index, yield, etc. | [ ] |
| | Independent Component Analysis | Identification, classification, and risk assessment of diseases and insect pests; soil type classification. | [ ] |
| | Anomaly Detection Algorithm | Detection of crop wilt, soil moisture, and pH anomalies. | [ ] |
| | Self-Organizing Maps | Classification of crops and rapid identification of soil types. | [ ] |
| | K-Means Clustering | Accurate identification of crops. | [ ] |
| | Principal Component Analysis | Accurate classification of crops based on growth characteristics such as color, texture, and size. | [ ] |
| Reinforcement Learning | Deep Q-Network | Retrieval of key growth information, such as vegetation indices, to effectively monitor crop growth and development. | [ ] |
| | Policy Gradient Methods | Optimization of crop irrigation and fertilization strategies. | [ ] |
| | Q-Learning | Optimization of agricultural decision-making and environmental interaction. | [ ] |
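As a concrete instance of one supervised entry in the table above (the nearest-neighbor algorithm applied to identifying crop varieties), the following sketch classifies synthetic two-band spectral samples with a plain NumPy k-NN. The feature values, class means, and the choice of k are illustrative assumptions, not data or settings from any cited study.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Classify each query point by majority vote among its k nearest training samples."""
    # Euclidean distance from every query point to every training sample.
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]            # indices of the k nearest neighbors
    votes = y_train[idx]                          # their class labels
    return np.array([np.bincount(v).argmax() for v in votes])

# Synthetic two-band reflectance features for two hypothetical crop classes.
rng = np.random.default_rng(1)
wheat = rng.normal([0.3, 0.6], 0.05, (50, 2))     # class 0
maize = rng.normal([0.5, 0.4], 0.05, (50, 2))     # class 1
X = np.vstack([wheat, maize])
y = np.array([0] * 50 + [1] * 50)

query = np.array([[0.31, 0.58], [0.52, 0.41]])    # one sample near each class mean
print(knn_predict(X, y, query, k=5))              # → [0 1]
```

Distance-based voting like this scales poorly to large scenes; in practice the tree-ensemble or deep models listed in the table are preferred for pixel-level crop mapping.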

Share and Cite

Wang, J.; Wang, Y.; Li, G.; Qi, Z. Integration of Remote Sensing and Machine Learning for Precision Agriculture: A Comprehensive Perspective on Applications. Agronomy 2024, 14, 1975. https://doi.org/10.3390/agronomy14091975



Blockchain-based color medical image cryptosystem for industrial Internet of Healthcare Things (IoHT)

  • Published: 02 September 2024


  • Fatma Khallaf 1 , 2 ,
  • Walid El-Shafai   ORCID: orcid.org/0000-0001-7509-2120 1 , 3 ,
  • El-Sayed M. El-Rabaie 1 &
  • Fathi E. Abd El-Samie   ORCID: orcid.org/0000-0001-8749-9518 1 , 4  

In recent years, smart devices and associated technologies, such as the Internet of Things (IoT), Industrial Internet of Things (IIoT), and Internet of Medical Things (IoMT), have proliferated substantially. However, the limited processing power and storage capacity of smart devices make them vulnerable to cyberattacks, rendering traditional security and cryptography techniques inadequate. To address these challenges, blockchain (BC) technology has emerged as a promising solution. This study introduces an efficient framework for the Internet of Healthcare Things (IoHT), presenting a novel cryptosystem for color medical images that combines BC technology with the IoT, the Secure Hash Algorithm 256-bit (SHA-256), shuffling, and bitwise XOR operations. The encryption scheme is designed for an IIoT grid network computing system and relies on the principles of diffusion and confusion. The strength of the proposed cryptosystem is evaluated against differential attacks with several comprehensive metrics. Simulation results and theoretical analysis demonstrate the cryptosystem's effectiveness, showing that it provides a high level of security and immunity to data leakage. The proposed cryptosystem offers a versatile range of technical solutions and strategies adaptable to various scenarios. The evaluation metrics, with approximate values of 99.61% for the Number of Pixels Change Rate (NPCR), 33.46% for the Unified Average Changed Intensity (UACI), and 8 bits for information entropy, closely align with the ideal outcomes. Consequently, this paper contributes to the advancement of secure and private medical image encryption systems based on BC technology, potentially mitigating the risks associated with cyberattacks on smart medical devices.
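The confusion–diffusion structure and the differential metrics quoted in the abstract can be illustrated with a deliberately simplified sketch. This is not the authors' scheme: the SHA-256 hash-chained keystream, the permutation construction, and the key values are assumptions made purely for illustration, and the blockchain layer is omitted entirely. It only shows why a well-mixed 8-bit cipher image should score near 99.61% NPCR, 33.46% UACI, and 8 bits of entropy.

```python
import hashlib
import numpy as np

def keystream(key: bytes, n: int) -> np.ndarray:
    """Expand a key into n pseudo-random bytes by SHA-256 hash chaining (illustrative only)."""
    out, block = bytearray(), key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return np.frombuffer(bytes(out[:n]), dtype=np.uint8)

def encrypt(img: np.ndarray, key: bytes) -> np.ndarray:
    flat = img.ravel()
    # Confusion: key-driven shuffling of pixel positions.
    perm = np.argsort(keystream(key + b"|perm", flat.size), kind="stable")
    # Diffusion-style masking: bitwise XOR with the keystream.
    return (flat[perm] ^ keystream(key, flat.size)).reshape(img.shape)

def decrypt(ct: np.ndarray, key: bytes) -> np.ndarray:
    flat = ct.ravel()
    perm = np.argsort(keystream(key + b"|perm", flat.size), kind="stable")
    out = np.empty_like(flat)
    out[perm] = flat ^ keystream(key, flat.size)  # undo XOR, then unshuffle
    return out.reshape(ct.shape)

def npcr(c1, c2):
    """Number of Pixels Change Rate (%); ideal value ~99.61 for 8-bit images."""
    return float(np.mean(c1 != c2) * 100.0)

def uaci(c1, c2):
    """Unified Average Changed Intensity (%); ideal value ~33.46 for 8-bit images."""
    return float(np.mean(np.abs(c1.astype(np.int16) - c2.astype(np.int16)) / 255.0) * 100.0)

def entropy(img):
    """Shannon entropy in bits per pixel; ideal value is 8 for 8-bit images."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

img = np.zeros((256, 256), dtype=np.uint8)          # worst case: zero-entropy plaintext
ct1 = encrypt(img, b"key-1")
ct2 = encrypt(img, b"key-2")                        # same image under a different key
assert np.array_equal(decrypt(ct1, b"key-1"), img)  # lossless round trip
print(round(entropy(ct1), 2))                       # close to the ideal 8 bits/pixel
print(round(npcr(ct1, ct2), 2), round(uaci(ct1, ct2), 2))  # near 99.61 / 33.46
```

Comparing ciphertexts of the same image under two keys demonstrates key sensitivity; the paper's NPCR/UACI figures additionally cover plaintext sensitivity, which this toy scheme does not attempt to reproduce.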


Data availability

All data are available upon request from the corresponding author.


Acknowledgements

The authors are very grateful to all the institutions in the affiliation list for successfully performing this research work. The authors would like to thank Prince Sultan University for their support.

The authors did not receive support from any organization for the submitted work.

Author information

Authors and affiliations

Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952, Egypt

Fatma Khallaf, Walid El-Shafai, El-Sayed M. El-Rabaie & Fathi E. Abd El-Samie

Department of Electrical Engineering, Faculty of Engineering, Ahram Canadian University, 6th October City, Giza, Egypt

Fatma Khallaf

Security Engineering Lab, Computer Science Department, Prince Sultan University, 11586, Riyadh, Saudi Arabia

Walid El-Shafai

Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia

Fathi E. Abd El-Samie


Contributions

All authors contributed equally to this work.

Corresponding author

Correspondence to Walid El-Shafai .

Ethics declarations

Ethics approval and consent to participate

All authors contributed to this work and approved its submission.

Competing interests

The authors have neither relevant financial nor non-financial interests to disclose.

Conflict of interest

The authors declare that they have no conflicts of interest.

Consent for publication

All authors consent to the submission and publication of this work.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Khallaf, F., El-Shafai, W., El-Rabaie, ES.M. et al. Blockchain-based color medical image cryptosystem for industrial Internet of Healthcare Things (IoHT). Multimed Tools Appl (2024). https://doi.org/10.1007/s11042-023-16777-w

Download citation

Received : 10 October 2022

Revised : 20 June 2023

Accepted : 31 August 2023

Published : 02 September 2024

DOI : https://doi.org/10.1007/s11042-023-16777-w


Keywords

  • Medical images
  • Blockchain (BC)
  • Internet of Medical Things (IoMT)
  • Internet of Healthcare Things (IoHT)
  • Healthcare applications
  • Cybersecurity

Advertisement

  • Find a journal
  • Publish with us
  • Track your research

IMAGES

  1. (PDF) Application of Image Processing in Real World

    research papers on applications of image processing

  2. (PDF) Review Paper On Image Processing

    research papers on applications of image processing

  3. (PDF) Digital image processing and applications

    research papers on applications of image processing

  4. Digital Image Processing Research Proposal [Professional Thesis Writers]

    research papers on applications of image processing

  5. 😊 Research paper on digital image processing. Digital Image Processing

    research papers on applications of image processing

  6. (PDF) Application of Image Processing in Agriculture: A Survey

    research papers on applications of image processing

VIDEO

  1. 2018 IEEE Transactions on Image Processing topics with abstract

  2. Introduction to Image Processing using OpenCV

  3. 001 Natural Language Processing (NLP)

  4. Digital Image Processing using MATLAB IMAGE INTERPOLATION

  5. STEP 1-End

  6. Enhancing Edge Processing: Imagers with In-pixel Processors

COMMENTS

  1. Image Processing: Research Opportunities and Challenges

    Image Processing: Research O pportunities and Challenges. Ravindra S. Hegadi. Department of Computer Science. Karnatak University, Dharwad-580003. ravindrahegadi@rediffmail. Abstract. Interest in ...

  2. (PDF) A Review on Image Processing

    Abstract. Image Processing includes changing the nature of an image in order to improve its pictorial information for human interpretation, for autonomous machine perception. Digital image ...

  3. (PDF) Advances in Artificial Intelligence for Image Processing

    AI has had a substantial influence on image processing, enabling cutting-edge methods and applications. The foundations of image processing are covered in this chapter, along with representation, formats ...

  4. Techniques and Applications of Image and Signal Processing : A

    This paper comprehensively overviews image and signal processing, including their fundamentals, advanced techniques, and applications. Image processing involves analyzing and manipulating digital images, while signal processing focuses on analyzing and interpreting signals in various domains. The fundamentals encompass digital signal representation, Fourier analysis, wavelet transforms ...
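The Fourier-analysis fundamentals this overview names can be illustrated with a naive 1-D discrete Fourier transform. The sketch below is my own illustration, not code from the paper; it shows how a constant signal concentrates all its energy in the DC bin:

```python
import cmath

def dft(signal):
    """Naive O(n^2) discrete Fourier transform of a real-valued signal."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A constant signal concentrates all energy in the DC (k = 0) bin.
spectrum = dft([1.0, 1.0, 1.0, 1.0])
print(round(abs(spectrum[0]), 6))  # 4.0
print(round(abs(spectrum[1]), 6))  # 0.0
```

Production code would use an FFT (O(n log n)) rather than this quadratic loop, but the quadratic form matches the textbook definition the fundamentals cover.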

  5. Image processing

    Image processing is manipulation of an image that has been digitised and uploaded into a computer. Software programs modify the image to make it more useful, and can for example be used to enable ...

  6. Deep learning models for digital image processing: a review

    Within the domain of image processing, a wide array of methodologies is dedicated to tasks including denoising, enhancement, segmentation, feature extraction, and classification. These techniques collectively address the challenges and opportunities posed by different aspects of image analysis and manipulation, enabling applications across various fields. Each of these methodologies ...

  7. Image Processing Technology Based on Machine Learning

    Machine learning is a relatively new field. As research in this area deepens, its applications are becoming increasingly extensive. At the same time, with the advancement of science and technology, graphics have become an indispensable medium of information transmission, and image processing technology is also booming. However, the traditional image processing ...

  8. Developments in Image Processing using Deep learning and Reinforcement

    The present study thoroughly explores essential and recent improvements, applications, and advancements within the sphere of image processing, offering insights into a domain characterized by continual and swift evolution. Additionally, the paper delineates prospective avenues for future research in this dynamic field.

  9. Advances in image processing using machine learning techniques

    The paper 'Ship Images Detection and Classification Based on Convolutional Neural Network with Multiple Feature Regions', by Zhijing Xu, Jiuwu Sun, and Yuhao Huo (SPR-2021-10-0144.R2), presents an exciting application of image recognition and classification in the maritime industry to cope with significant challenges for intelligent ship ...

  10. Home

    The journal is dedicated to the real-time aspects of image and video processing, bridging the gap between theory and practice. Covers real-time image processing systems and algorithms for various applications. Presents practical and real-time architectures for image processing systems. Provides tools, simulation and modeling for real-time image ...

  11. Digital Image Processing: Advanced Technologies and Applications

    A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications. Feature papers are submitted upon individual invitation or recommendation by the scientific editors and must receive positive feedback from the ...

  12. Recent trends in image processing and pattern recognition

    The Call for Papers of the special issue was initially sent out to the participants of the 2018 conference (2nd International Conference on Recent Trends in Image Processing and Pattern Recognition). To attract high quality research articles, we also accepted papers for review from outside the conference event.

  13. Grand Challenges in Image Processing

    Introduction. The field of image processing has been the subject of intensive research and development activities for several decades. This broad area encompasses topics such as image/video processing, image/video analysis, image/video communications, image/video sensing, modeling and representation, computational imaging, electronic imaging, information forensics and security, 3D imaging ...

  14. Techniques and Challenges of Image Segmentation: A Review

    Image segmentation, which has become a research hotspot in the field of image processing and computer vision, refers to the process of dividing an image into meaningful and non-overlapping regions, and it is an essential step in natural scene understanding. Despite decades of effort and many achievements, there are still challenges in feature extraction and model design. In this paper, we ...
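The simplest concrete instance of the definition above, dividing an image into meaningful, non-overlapping regions, is global thresholding. This is a toy illustration of that idea, not one of the techniques reviewed in the paper:

```python
def threshold_segment(image, threshold):
    """Split a grayscale image into foreground/background by a global threshold.

    Returns a binary mask: 1 where pixel intensity exceeds the threshold,
    0 elsewhere -- two non-overlapping regions covering the whole image.
    """
    return [[1 if pixel > threshold else 0 for pixel in row]
            for row in image]

# Toy 3x3 "image": a bright object (values ~200) on a dark background (~50).
image = [[ 50,  52, 200],
         [ 49, 210, 205],
         [198, 202, 201]]
mask = threshold_segment(image, 128)
print(mask)  # [[0, 0, 1], [0, 1, 1], [1, 1, 1]]
```

The feature-extraction and model-design challenges the review discusses arise precisely because a single global threshold fails on natural scenes with uneven lighting and texture.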

  15. Application of artificial intelligence algorithms in image processing

    To achieve better image processing results, this paper focuses on the application of artificial intelligence algorithms in image processing. Image segmentation is a technique that decomposes an image into regions with different characteristics and extracts useful targets. ... After the practice and research of image processing, the ...

  16. 471383 PDFs

    All kinds of image processing approaches. | Explore the latest full-text research PDFs, articles, conference papers, preprints and more on IMAGE PROCESSING. Find methods information, sources ...

  17. Viewpoints on Medical Image Processing: From Science to Application

    This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii ...

  18. Deep Learning-based Image Text Processing Research

    Deep learning is a powerful multi-layer architecture that has important applications in image processing and text classification. This paper first introduces the development of deep learning and two important algorithms of deep learning: convolutional neural networks and recurrent neural networks. The paper then introduces three applications of deep learning for image recognition, image ...
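As a minimal illustration of the operation at the heart of the convolutional neural networks mentioned above (my own sketch, not code from the paper), a single 2-D valid-mode convolution can be written as:

```python
def conv2d(image, kernel):
    """Single 2-D convolution, valid mode, without kernel flipping
    (i.e. cross-correlation, as implemented in CNN layers)."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(ow)]
            for y in range(oh)]

# A vertical difference kernel responds at the boundary between 0s and 1s.
image = [[0, 0, 0],
         [0, 0, 0],
         [1, 1, 1]]
kernel = [[-1], [1]]  # 2x1 difference kernel
print(conv2d(image, kernel))  # [[0, 0, 0], [1, 1, 1]]
```

In a real CNN the kernel weights are learned rather than hand-crafted, but the sliding dot product is the same.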

  19. Study on Image Filtering -- Techniques, Algorithm and Applications

    Image processing is one of the most immerging and widely growing techniques making it a lively research field. Image processing is converting an image to a digital format and then doing different operations on it, such as improving the image or extracting various valuable data. Image filtering is one of the fascinating applications of image processing. Image filtering is a technique for ...
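One classic image filter of the kind such a study surveys, the median filter, can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation:

```python
import statistics

def median_filter(image, size=3):
    """Apply a size x size median filter, a classic noise-suppressing filter.

    Border pixels are left unchanged for simplicity.
    """
    h, w, r = len(image), len(image[0]), size // 2
    out = [row[:] for row in image]
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = [image[y + dy][x + dx]
                      for dy in range(-r, r + 1)
                      for dx in range(-r, r + 1)]
            out[y][x] = statistics.median(window)
    return out

# A single salt-noise spike (255) in a flat region is removed by the median.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # 10
```

Unlike a mean filter, the median discards the outlier entirely rather than smearing it across the window, which is why it excels at salt-and-pepper noise.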

  20. Top 1287 papers published in the topic of Image processing in 2023

    Brain Tumor Diagnosis using Image Fusion and Deep Learning. 22 Mar 2023. TL;DR: In this paper, brain tumor images are used with a discrete cosine transform-based fusion approach to create fused pictures, which can enhance the quality of the final images and hence improve classifier performance.
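The fusion approach in that TL;DR rests on the discrete cosine transform. As background only (not the paper's pipeline), a naive 1-D DCT-II looks like this:

```python
import math

def dct_ii(signal):
    """Naive O(n^2) 1-D DCT-II -- the transform underlying DCT-based fusion."""
    n = len(signal)
    return [sum(signal[t] * math.cos(math.pi * k * (2 * t + 1) / (2 * n))
                for t in range(n))
            for k in range(n)]

# Like the DFT, the DCT of a constant signal is concentrated in coefficient 0.
spectrum = dct_ii([1.0, 1.0, 1.0, 1.0])
print(round(spectrum[0], 6))  # 4.0
```

DCT-based fusion methods typically transform each source image, combine coefficients (e.g. by magnitude), and invert; none of that selection logic is shown here.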

  21. Real-time intelligent image processing for the internet of things

    Overall, the eleven papers appearing in this special issue demonstrate multiple perspectives and approaches with implications for the theories, models, and algorithms used in real-time image processing and its IoT applications. These papers identify frameworks and techniques for artificial intelligence and deep learning, helping the field to ...

  22. Integration of Remote Sensing and Machine Learning for Precision ...


  23. Development Model Based on Visual Image Big Data Applied to Art

    This paper aims to explore the application of visual image big data (BD) in art management, and proposes and develops a new art management model. First, this study conducted extensive research on the overview and application of big data, focusing on the characteristics of big data and its application methods in art management.

  24. (PDF) Studies on application of image processing in ...

    1. Studies on application of image processing in various fields: An overview. T Prabaharan, P Periasamy, V Mugendiran, Ramanan. 1 Research Scholar, St. Peter's Institute of Higher Education and ...

  25. An improved multi‐scale YOLOv8 for apple leaf dense lesion detection

    In order to enhance the detection accuracy of multi-scale disease spots, this paper proposes a more suitable method based on YOLOv8. The proposed approach is validated on a dataset containing eight kinds of apple leaf disease instances in complex field scenarios.

  26. Artificial Intelligence Image Processing Based on Wireless Sensor

    In addition, the popularity of mobile networks enables more users to participate in environmental monitoring, obtain more data through crowdsourcing, and improve the breadth and depth of research. With the advancement of image processing and artificial intelligence technology, it is possible to combine these technologies with wireless sensor ...

  27. Applications of image processing algorithms on the modern digital image

    Abstract. Digital image processing technology is one of the most vital areas of computer science discipline. Its application areas involve computer-aided design, Fourier transformation, three ...

  28. Application of Wiener Filter Based on Improved BB Gradient Descent in

    Iris recognition, renowned for its exceptional precision, has been extensively utilized across diverse industries. However, the presence of noise and blur frequently compromises the quality of iris images, thereby adversely affecting recognition accuracy. In this research, we have refined the traditional Wiener filter image restoration technique by integrating it with a gradient descent ...
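That paper couples a Wiener filter with an improved Barzilai-Borwein (BB) gradient descent. As a generic sketch of plain fixed-step gradient descent only, not the paper's BB variant, minimizing a simple quadratic looks like:

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Plain fixed-step gradient descent. (The paper uses an improved
    Barzilai-Borwein step size; this shows only the generic scheme.)"""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); the minimum is x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```

The BB method replaces the fixed `lr` with a step size estimated from successive gradients, which typically converges far faster on ill-conditioned restoration problems.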

  29. Implementation of Wavelet Transform Analysis Filter Using FPGA

    Discrete Wavelet Transform (DWT) has been an important mathematical tool in signal processing applications over the last decades. DWT is widely used in several domains, like signal and image processing, compression, statistics, computer vision, and data communication, etc. [1,2,3]. Various transmission systems nowadays, like WiFi (IEEE 802.11) and WiMAX (IEEE 802.16), are based on ...
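A one-level Haar DWT analysis step, the simplest wavelet filter bank, splits a signal into approximation and detail coefficients. The Python below is an illustration of the transform itself, not the FPGA implementation the paper describes:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform (analysis filter bank).

    Returns (approximation, detail) coefficients; input length must be even.
    """
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

# A constant signal has zero detail energy -- the basis of wavelet compression.
approx, detail = haar_dwt([4.0, 4.0, 4.0, 4.0])
print(detail)  # [0.0, 0.0]
```

Hardware realizations like the paper's pipeline these sum/difference operations in fixed point, but the arithmetic per coefficient pair is the same.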

  30. Blockchain-based color medical image cryptosystem for industrial

    Algorithm (1): Color Medical Image Security for Smart IoT E-Healthcare Applications.
    1) Initialize the algorithm.
    2) Obtain the output processing result.
    3) Set up the BC cloud service among network elements.
    4) Identify image capturing devices as nodes.
    5) Perform initial checks for the image sent to the network.
    6)