Empowering Sustainable Agriculture: An Enhanced Deep Learning Model for PD Detection in Agricultural Operation System

A country's financial growth is strongly influenced by its rate of agricultural output. Nevertheless, Plant Diseases (PD) pose a substantial obstacle to the cultivation and quality of food crops. The timely detection of PDs is paramount for public wellness and the promotion of Sustainable Agriculture (SA). The conventional diagnostic procedure entails a pathologist's visual evaluation of a particular plant through in-person visits. However, manual inspection of crop diseases is limited by its low accuracy and the scarcity of skilled workers. To address these concerns, there is a need to develop automated methodologies capable of effectively identifying and classifying a wide range of PDs. The precise detection and categorization of PDs is challenging due to several factors: low-intensity information in both the image background and foreground, strong color similarity between healthy and diseased plant regions, noise in the specimens, and variations in the location, chrominance, structure, and dimensions of plant leaves. This paper presents a novel approach for identifying and categorizing PDs using a Deep Convolutional Neural Network - Transfer Learning (DCNN-TL) technique in the Agricultural Operation System (AOS). The proposed method aims to enhance the capability of SA to accurately identify and categorize PDs. The improved Deep Learning (DL) methodology incorporates a TL technique based on a fine-tuned Visual Geometry Group 19 (VGG19) architecture. The revised system accurately detects and diagnoses five distinct PD categories. Among the evaluated methods, the proposed DCNN-TL shows outstanding precision, recall, and accuracy values of 0.996, 0.9994, and 0.9998, respectively.


Introduction
The emergence of smart agriculture, facilitated by the convergence of Internet of Things (IoT) devices, sensors, and cutting-edge analytics, has presented novel opportunities for continuous monitoring and data-centric decision-making. The focus of the study revolves around the development and execution of an improved DL framework that is customized to effectively address the unique requirements associated with detecting PDs within the AOS. This model seeks to address the present shortcomings in disease detection accuracy within agricultural ecosystems by integrating advanced DL algorithms with the inherent challenges of these systems. The ultimate goal is to empower farmers to take preventative steps and optimize the use of resources.
PDs have the potential to directly impede growth, resulting in detrimental consequences for crop productivity. A global estimation suggests that the economic loss could reach a maximum of $25 billion annually [1]. The presence of diverse conditions poses a significant challenge for researchers, primarily due to the geographic variations that can impede the precise identification process. Furthermore, conventional approaches predominantly depend on experts, practical knowledge, and instructional materials. However, many of these methods tend to be costly, time-consuming, and labor-intensive, making it challenging to achieve accurate detection [2]. Hence, there is an immediate need for a swift and precise method to detect and classify PDs, as it holds significant implications for agricultural enterprises and ecological sustainability. The rapid development of internet technologies, specifically the accessibility of multimodality data derived from diverse sensors such as the IoT and sensor networks, has been notable. This study proposes a novel model for PD detection utilizing a DL algorithm, aiming to address the challenges mentioned above. The function encompasses plant image extraction, image segmentation, and disease identification, employing an enhanced DL algorithm. The initial objective involves retrieving plant images, although several factors present challenges in accurately identifying them, including the soil composition and the level of illumination within the intricate environment [3].
Machine Learning (ML) is a well-established discipline within Artificial Intelligence (AI) that facilitates the ability of machines to exhibit behaviors akin to those of humans. By employing these methodologies, ML systems exhibit a responsive behavior wherein they acquire knowledge and utilize it as a basis for subsequent learning and implementation [4]. ML is an interdisciplinary field of study encompassing various research domains, one of which is agriculture. These techniques can be applied across diverse computation domains, facilitating the creation and advancement of novel algorithms. These algorithms are subsequently employed in diverse agricultural contexts to detect crop diseases at an earlier stage and classify them according to their respective disease types.
Convolutional Neural Networks (CNNs), a prominent DL technique, have demonstrated exceptional performance in image classification, leading to significant accomplishments [5]. To tackle these concerns, we must enhance our comprehension of current methodologies through the systematic observation, measurement, and analysis of extensive agricultural data. Comprehending the technologies utilized in both short-term agricultural management and large-scale ecosystems is imperative. Big data analysis is considered one of the sophisticated methodologies within the realm of DL. Typically, a DL model consists of a minimum of three layers, wherein each layer is connected to data features through neurons, generating increasingly intricate information. DL models facilitate the acquisition of input features through the hierarchical organization of interconnected networks of neurons [6].
The research presented in this study holds importance beyond the scope of precision agriculture. It aligns with the wider objective of promoting environmentally friendly farming methods that prioritize ecological awareness [7]. The objective is to decrease dependence on chemical inputs, mitigate crop damage, and contribute to sustainable agricultural output by providing farmers with an effective tool for rapid disease identification and intervention. The proposed model not only conforms to the standards of precision agriculture but also emphasizes the crucial role of technology in promoting agricultural resilience amidst shifting climate trends and emerging PDs.
Given the current trend of global population growth and the subsequent rise in food demand, the agricultural sector is confronted with significant challenges regarding sustainability, efficiency, and food security [8]. To tackle these issues, it is imperative to seek novel solutions, and the progress made in technology, specifically in the field of DL, offers a promising prospect to transform agricultural methodologies. This study is grounded in the pressing necessity to alleviate crop failures resulting from diseases, which significantly imperil worldwide food production. The integration of DL techniques presents a promising opportunity to revolutionize the agricultural landscape, as conventional disease detection and management approaches may be inadequate in delivering rapid and precise interventions.

Related works
Technology, particularly DL, has opened new doors for agricultural reform. PDs affect global food security, so advanced technologies are needed to create accurate, effective, and environmentally sustainable disease detection systems. This survey examines the methodologies, implementations, and results of DL models designed for PD detection, which have made significant contributions. Subudhi et al. (2023) proposed a sustainable farming method that uses AI to interactively visualize and process hyperspectral imaging data [19]. Visualizing sustainable agricultural practices helps farmers make informed decisions. Results show improved hyperspectral data interpretation, improving crop health indicator identification. Empowering farmers through easily accessible visualization tools promotes sustainable agricultural practices. However, drawbacks such as the need for specialized equipment and expertise must be considered.
In their 2020 study, Hernández and López introduced uncertainty quantification to PD detection using Bayesian DL [10]. The proposed method trains disease detection models with Bayesian DL and quantifies their uncertainty. The results include disease predictions together with uncertainty estimates. The outcome values show improved disease detection reliability, especially in uncertain situations. This approach improves model resilience, giving farmers more reliable diagnostic insights. The computational complexity of Bayesian methods may be a drawback. Arsenovic et al. (2019) addressed DL constraints in PD detection. The methodology improves and optimizes DL models to overcome limitations [11]. The implementation process includes model training and evaluation on many datasets. The result is a refined DL method for PD detection, with more precise and resilient outcome values. One benefit of this approach is its ability to overcome limitations; however, it may require substantial computational power. Sujatha et al. (2021) compared DL and ML for PD detection. The methodology involves training and testing DL and ML disease detection models [12]. Implementation involves feature extraction and model training. The output compares DL and ML for disease detection. The outcome values show that DL achieves higher accuracy and may improve disease detection. However, interpretability may be a problem with these models.
Kumar et al. (2023) developed a lightweight, reliable DL model for PD detection. The methodology emphasizes lightweight models for low-resource environments. Implementation includes model optimization and validation [13]. A reliable, computationally efficient DL model is the result. Results show improved efficiency on low-computational devices. This approach works well in low-resource settings. However, this method may have drawbacks, such as a trade-off between model complexity and accuracy.
Nagasubramanian et al. (2019) used explainable 3D-DL on hyperspectral images to identify PDs. The method uses advanced 3D-DL to identify diseases. Models are trained using hyperspectral image data during implementation [14]. The output clarifies disease identification, and the outcome values show how well the model is understood. Explainability in agricultural models helps farmers understand model decisions. This approach may increase computational complexity, which is a drawback. Rao et al. specifically used transfer learning methods for analysis [15]. The study uses transfer learning to adapt pre-existing models for disease detection. The implementation process refines models using grape and mango datasets. The result is a leaf disease detection solution for precision agriculture. The results show that transfer learning works in agriculture and improves model generalization, although domain adaptation limitations present some challenges.
Ahmed and Reddy (2021) proposed a mobile DL system to diagnose plant leaf diseases. This study aims to create an easy-to-use mobile app for disease detection. DL models are integrated into the mobile platform during implementation [16]. The portable system detects diseases while people are moving. The results show that mobile disease detection is feasible. This technology offers real-time monitoring and accessibility. However, mobile device and network connectivity dependence may be drawbacks. This literature review examined the convergence of DL and SA, focusing on PD detection in the AOS. The survey presents various research methods and findings, showing how technology-based approaches to PDs have evolved [9].

Identifying and categorizing PDs using a DCNN-TL technique
This paper presents an innovative method for the categorization of PD. The scheme depicted in Fig. 1 can identify and categorize five different categories: healthy, narrow brown spot, leaf blast, brown spot, and bacterial leaf blight.

Dataset
The method's identification and categorization performance is assessed using the PlantVillage database [17]. The PlantVillage database is a comprehensive and publicly accessible dataset that serves as a standard reference for categorizing PDs and is widely utilized by various existing methodologies to assess their performance. To assess the resilience of the proposed methodology, a sequence of experiments has been conducted on a database containing various plant categories and their corresponding diseases. The dataset utilized in this study, referred to as PlantVillage, comprises a total of 55,416 images depicting plant leaves. These images encompass 13 distinct healthy plant classes and 25 diseased plant classes, representing 13 different species of plants. The samples for all 13 crop species, such as Tomato, Potato, Apple, Grape, and others, have been obtained from the database. The dataset contains a diverse range of samples exhibiting variations in angle, size, color, light, blurring, and noise. This diversity contributes to the database's suitability for PD identification. Fig. 2 displays a limited number of samples extracted from the PlantVillage dataset [18].

Preprocessing
The image augmentation technique has been employed to improve the quality of the initial dataset, while extension techniques were utilized to expand its size. Image detail is enhanced and image contrast intensified by manipulating the edge-aware local contrast. Strong edges are preserved using a technique that establishes a threshold value, which acts as a minimum intensity amplitude, thereby ensuring their integrity. In the present study, the threshold has been set to 0.15, while the augmentation value was set to 0.5. An anisotropic diffusion filter is employed within the contrast-flattening procedure. The zero-frequency component is relocated to the center of the spectrum through the Fourier transform.
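As a minimal illustration of the last step above, the following NumPy sketch shows how the zero-frequency (DC) component of a 2-D Fourier transform is relocated to the center of the spectrum; the function name and the toy input are illustrative, not the authors' pipeline.

```python
import numpy as np

def centered_spectrum(image: np.ndarray) -> np.ndarray:
    """Return the magnitude spectrum with the DC term moved to the center."""
    freq = np.fft.fft2(image)        # 2-D discrete Fourier transform
    shifted = np.fft.fftshift(freq)  # relocate the zero-frequency component
    return np.abs(shifted)

img = np.ones((8, 8))                # flat image: all energy in the DC term
spec = centered_spectrum(img)
# For an 8x8 input, fftshift moves the DC term from (0, 0) to (4, 4).
print(int(np.argmax(spec)))          # flat index of the spectral peak
```

This centering step is a standard convenience before spectrum-based filtering, since it places low frequencies in the middle of the array.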
In ML research, it is paramount to make diligent efforts to prevent overfitting. This paper employs data augmentation to enlarge the dataset and mitigate the overfitting risk. Data augmentation is a straightforward procedure wherein slight modifications are applied to the original images to generate novel images. In this study, we employ three techniques: rotation, translation, and scale-in/scale-out. These straightforward techniques generate novel images that exhibit a strong correlation with the originals. "Rotation" involves rotating the initial image; the images are rotated within a -17 to +17 degree range. The scale-in/scale-out process adjusts the zoom level, allowing both zooming in and out; here, a scaling factor ranging from 107% to 117% is applied to the height and width dimensions. Subsequently, translation is performed by displacing the image along the x and y axes; the images are translated from -7 to +17.
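The parameter ranges above can be sketched as the sampling of a random affine transform. This is a hedged illustration of the quoted ranges only (rotation in [-17, +17] degrees, scale in [1.07, 1.17], translation in [-7, +17] pixels); the function and variable names are hypothetical, not the authors' implementation.

```python
import numpy as np

def sample_affine(rng: np.random.Generator) -> np.ndarray:
    """Sample one augmentation as a 3x3 homogeneous affine matrix."""
    angle = np.deg2rad(rng.uniform(-17.0, 17.0))   # rotation
    scale = rng.uniform(1.07, 1.17)                # scale-in/scale-out
    tx, ty = rng.uniform(-7.0, 17.0, size=2)       # translation (pixels)
    c, s = np.cos(angle), np.sin(angle)
    # Rotation and scaling in the upper-left 2x2 block, translation in
    # the last column.
    return np.array([[scale * c, -scale * s, tx],
                     [scale * s,  scale * c, ty],
                     [0.0,        0.0,       1.0]])

rng = np.random.default_rng(0)
A = sample_affine(rng)
```

A matrix like `A` would then be applied to pixel coordinates by any image-warping routine to produce the augmented sample.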

DCNN-TL model with enhanced VGG 19 framework
The training and testing processes necessitate substantial computational resources and significant storage capacity, particularly in cases where metadata is incorporated. On the other hand, the fine-tuning procedure employed in TL-based models presents a valuable method for optimizing resource allocation through "network surgery" to perform feature extraction. Fine-tuning improves the original architecture and reduces memory utilization. Constructing and validating a CNN model, involving the iterative exploration of various parameters such as the number of layers and the number of nodes, can be complex.
Multiple techniques exist for fine-tuning CNNs, encompassing architectural modifications, model restructuring, and partial layer freezing to leverage pre-trained weights. The fine-tuning procedure primarily involves the following key steps:
• Pre-train the CNN.
• Truncate the final output layer and replicate all framework plans and parameters to create a new CNN.
• Replace the head of the CNN with a series of Fully Connected (FC) layers, and initialize the new model parameters randomly.
• Retrain the output layer, with all parameters adjusted following the original model.
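The partial-layer-freezing idea can be sketched in a framework-agnostic way. The snippet below is an illustrative toy model of the two-stage scheme described later in this section (stage 1: train only the FC head; stage 2: also unfreeze the final feature-extraction block); the layer names and the dictionary representation are hypothetical placeholders, not a real DL framework API.

```python
def set_trainable(layers, stage):
    """Mark which layers receive gradient updates in each fine-tuning stage."""
    for layer in layers:
        if stage == 1:
            # Stage 1: freeze all feature extractors, train only the FC head.
            layer["trainable"] = layer["kind"] == "fc"
        else:
            # Stage 2: additionally re-activate the last conv block.
            layer["trainable"] = layer["kind"] == "fc" or layer["name"] == "block5"
    return layers

# A VGG-like stack: five conv blocks followed by three FC layers.
vgg_like = [{"name": f"block{i}", "kind": "conv", "trainable": False} for i in range(1, 6)]
vgg_like += [{"name": f"fc{i}", "kind": "fc", "trainable": False} for i in range(1, 4)]

stage1 = set_trainable([dict(l) for l in vgg_like], stage=1)
stage2 = set_trainable([dict(l) for l in vgg_like], stage=2)
print(sum(l["trainable"] for l in stage1), sum(l["trainable"] for l in stage2))  # 3 4
```

In a real framework the same effect is obtained by toggling the trainability flag on each pre-trained layer before recompiling the model.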
The VGG is a CNN architecture that consists of multiple levels. The VGG-16 and VGG-19 architectures comprise 16 and 19 weight layers, respectively. These architectures utilize convolutional filters of small dimensions to increase the depth of the network. Both the VGG16 and VGG19 models take an input image of dimensions 224 × 224 with three color channels. The input is fed into Convolutional Layers (CL) with a minimal receptive field size of 3 × 3, followed by max-pooling layers. Utilizing the ReLU activation function in the VGG network reduces the training time for the initial two VGG sets, which consist of conv3-64 and conv3-128 layers, respectively.
To preserve spatial resolution, a 2 × 2 window is utilized in each max-pooling layer that follows a set of CL, with a stride of 2 (the number of pixel shifts across the input matrix). Moreover, the number of channels employed in the CL varies from 64 to 512. DenseNet, an extension of ResNet, incorporates the technique of multilayer feature concatenation across all subsequent layers. This approach effectively streamlines the training of DL networks by decreasing the overall number of parameters in the framework being learned. Rather than directly summing the outputs of previous layers, which would degrade the model's efficiency, DenseNet concatenates them. The present study involves the implementation of the DenseNet-201 architecture, which consists of 204 deep layers. This architecture comprises four dense blocks, each consisting of a combination of 1 × 1 and 3 × 3 CL. Following each dense block, a transition block consists of a 1 × 1 CL and a 2 × 2 pooling layer. It is important to note that the final block deviates from this pattern and concludes with a classification layer incorporating a 7 × 7 average pool. The preceding block is succeeded by an FC network comprising four output nodes.
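The 2 × 2 max-pooling with stride 2 mentioned above can be shown in a few lines of NumPy: each pooling step keeps the maximum of every non-overlapping 2 × 2 window, halving the spatial resolution. This is a generic sketch of the operation, not the paper's code.

```python
import numpy as np

def max_pool_2x2(x: np.ndarray) -> np.ndarray:
    """2x2 max pooling with stride 2 on a single-channel feature map."""
    h, w = x.shape
    # Reshape into non-overlapping 2x2 blocks, then take each block's max.
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 1, 2, 3],
              [4, 5, 6, 7]], dtype=float)
print(max_pool_2x2(x))  # [[6. 8.] [9. 7.]]
```

Because the stride equals the window size, a 224 × 224 feature map shrinks to 112 × 112 after one such layer, which is how VGG progressively reduces spatial resolution while channel depth grows.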
The VGG19 network comprises 16 CL, each followed by a ReLU activation function. Following the CL, a linear classifier consisting of three FC layers is implemented, with a dropout rate of 50% applied between the first and second FC layers. The first two FC layers have 3996 features each, whereas the final layer has six. The learning rate is 1 × 10⁻⁴ and the batch size is 250.
In this study, two stages of fine-tuning have been implemented. The proposed approach employs fine-tuned TL on the VGG19 model to identify PDs. The first stage freezes all layers responsible for feature extraction while releasing the FC layers responsible for classification. In contrast, the second stage freezes the initial feature extraction layers while reactivating the final feature extraction block together with the FC layers. The second stage requires additional training and an extended time; nevertheless, it is anticipated to yield superior outcomes. At this stage, the VGG19 model undergoes partial freezing, where the first ten layers are immobilized while the remaining layers are re-trained for fine-tuning.

Results and discussion
Accurate identification of multiple PDs holds significant importance in developing a robust framework for the automated recognition and classification of PD.We conducted a series of experiments to assess the proposed technique's localization efficacy.All samples of the PlantVillage dataset have been thoroughly tested.The reported results demonstrate that enhanced VGG19 exhibits a high level of accuracy in detecting and recognizing PDs across diverse categories.Additionally, the proposed methodology demonstrates resilience against various post-processing manipulations, such as noise addition, light adjustments, and image distortions.The VGG19 technique can effectively detect and localize multiple PDs through its localization ability.These metrics facilitate the evaluation of the system's performance in identifying various types of PDs.The findings from both the visual and numeric analyses provide evidence that the proposed methodology is effective in correctly identifying and categorizing PDs.
Accuracy, precision, recall, and F1-score are metrics commonly used to evaluate the performance of classification models. Their significance becomes particularly pronounced when dealing with imbalanced class distributions, as relying solely on accuracy may not yield a comprehensive evaluation of a model's performance. These metrics provide a more comprehensive assessment, particularly in situations where different errors have distinct implications. Fig. 4 shows the relative analysis of precision, recall, and accuracy for various DL techniques for PD detection. Among the evaluated methods, the proposed DCNN-TL demonstrates outstanding precision, recall, and accuracy values of 0.996, 0.9994, and 0.9998, respectively. This observation highlights the exceptional capability of the model in accurately detecting positive instances while minimizing both False Positives (FP) and False Negatives (FN), resulting in a nearly flawless level of accuracy. Kumar et al. [13] also exhibit robust performance, achieving precision, recall, and accuracy values of 0.9935, 0.9936, and 0.9935, respectively. The proposed DCNN-TL demonstrates superior performance compared to previous methodologies, such as those presented by Nagasubramanian et al. [14], Rao et al. [15], and Ahmed et al. [16]. This highlights the potential of DCNN-TL as an advanced and dependable approach for accurately detecting PDs. The findings of this study emphasize the importance of the proposed DCNN-TL in advancing deep learning-based methods for AOS.
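For reference, the metrics discussed above follow from the counts of true/false positives and negatives. The sketch below uses the standard definitions with illustrative counts; the numbers are not the paper's confusion matrix.

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    """Compute precision, recall, accuracy, and F1 from confusion counts."""
    precision = tp / (tp + fp)            # correctness of positive predictions
    recall = tp / (tp + fn)               # coverage of actual positives
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, accuracy, f1

# Illustrative counts for a single disease class.
p, r, a, f1 = classification_metrics(tp=90, fp=10, fn=0, tn=900)
print(round(p, 2), round(r, 2), round(a, 2))  # 0.9 1.0 0.99
```

Note how, with many true negatives, accuracy stays high even when precision drops, which is why precision and recall are reported alongside accuracy for imbalanced classes.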

Conclusion
This paper introduces a new methodology for identifying and classifying PDs within an AOS by utilizing a DCNN-TL technique. The proposed methodology aims to augment the efficacy of SA in precisely detecting and classifying PDs. The enhanced DL methodology integrates a TL technique with a fine-tuned VGG19 architecture. The proposed revised system exhibits a notable level of precision in identifying and classifying five distinct categories of PDs. The DCNN-TL method proposed in this study exhibits exceptional precision, recall, and accuracy metrics, with values of 0.996, 0.9994, and 0.9998, respectively. Future research will involve developing a comprehensive DL system that utilizes drone and IoT technology. This system will be subjected to practical testing in real-life, real-time scenarios. Furthermore, our research endeavors will persist in the quest for the most effective DL methodology to accurately diagnose all known plant leaf diseases. Similarly, within the realm of agriculture, our research endeavors encompass the investigation of various plant leaf diseases that hold comparable significance to the wellbeing of humanity.

Fig. 1. DCNN-TL technique using VGG19 for PD detection.
The system under consideration is among a limited number of systems documented in the existing literature that can classify five categories; most papers cover a range of 2 to 4 distinct classes. The suggested method utilizes DCNNs for TL. As part of the preprocessing stages, the images undergo several transformations, including background elimination, resizing, and enhancement. Data extension, or augmentation, is additionally conducted to increase the database's size: trivial adjustments are applied to the original images to produce novel and distinct images. Minor modifications can encompass rotational adjustments, scaling in or out, and translations. The features are subsequently extracted utilizing the VGG19 model. Feature reduction is achieved by utilizing the flattened, dense, and softmax layers within the VGG19 architecture. The final layers of the VGG19 model are responsible for performing the classification task. The suggested method is assessed using the metrics of accuracy, precision, and F1-measure.

Fig. 3. Class-wise performance of the proposed DCNN-TL using fine-tuned VGG19 for AOS.
Fig. 3 depicts the class-wise performance of the proposed DCNN-TL using fine-tuned VGG19 for AOS. The x-axis corresponds to a distinct crop disease, while the y-axis represents the metrics associated with that disease. Significantly, the model exhibits outstanding performance across various categories, attaining elevated metric levels. In the Tomato-Healthy class, the model achieves a perfect precision, recall, and F1-score of 1, which indicates a precise and comprehensive identification of healthy tomato leaves. Consistent and remarkable outcomes are evident across various diseases and crops, including Potato, Apple, and Grape. The high precision values indicate a low occurrence of false positives, while the high recall values demonstrate the model's ability to identify a substantial proportion of true positive instances. The F1-score combines precision and recall using the harmonic mean, offering a balanced evaluation that highlights the resilience of the proposed DCNN-TL. These findings highlight the model's efficacy in classifying diseases, indicating its potential as a dependable instrument for monitoring plant health within the AOS.