Accuracy Measurement of Hyperspectral Image Classification in Remote Sensing with the Light Spectrum-based Affinity Propagation Clustering-based Segmentation

░ ABSTRACT - Hyperspectral image classification is a challenge shared by remote sensing and computer vision. It entails grouping the pixels of hyperspectral images into several classes according to their spectral signatures. Hyperspectral images are useful for a variety of applications, including vegetation study, mineral mapping, and urban land-use mapping, since they record an object's reflectance in hundreds of narrow, contiguous wavelength bands. The objective of this task is to correctly identify and categorize the object classes in the image. Many approaches have been put forward by researchers in this field to enhance segmentation and classification accuracy; however, they fail to attain optimal accuracy due to the intricate nature of the images. To tackle these issues, we propose a novel Modified Extreme Learning Machine (M-ELM) approach for credible hyperspectral image classification on publicly available Kaggle datasets. Before classification, the input images are segmented using the Light Spectrum Optimizer-based modified affinity propagation clustering technique (LSO-MAPC). At the outset, the images are pre-processed spatially using a non-linear diffusion partial differential equation. Experiments are carried out to analyze the performance of the proposed method and compare it quantitatively with state-of-the-art works. The proposed approach attains a classification accuracy of 96%.


░ 1. INTRODUCTION
Hyperspectral imaging (HSI) [1] technology examines a broad range of radiation instead of assigning only the three major colors (red, green, and blue) to each pixel. To offer additional detail about the depicted scene, the illumination reaching every pixel is divided into a variety of spectral ranges. Multispectral data sets [2] typically consist of 5 to 10 bands with very wide bandwidths (70-400 nm), while hyperspectral data sets often consist of 100 to 200 bands with comparatively narrow bandwidths (5-10 nm). The whole wavelength range is collected and processed during hyperspectral image acquisition [3].
Hyperspectral imaging aims to obtain the spectrum of every single pixel in a picture of a scene in order to locate objects, identify materials, or observe processes. Compared to multispectral data, hyperspectral data has better features for identifying and differentiating specific characteristics or entities. Owing to their ability to acquire narrow-band information, hyperspectral sensors can give comprehensive data about any item, and precise spectral resolution is one of their defining features. Airborne, spaceborne, and underwater vehicles employ such sensors to collect precise spectral measurements for a variety of uses. It is easier to distinguish between various features on the globe's surface when using sensors that collect pictures in a large number of narrow, contiguous wavelength bands.
In addition to synthetic aperture radar (SAR) [4] and traditional optical data, hyperspectral imagery is a further information format. Contrary to SAR missions, hyperspectral sensors do not emit radiation to illuminate their targets; SAR creates a picture by sending repeated radio pulses toward a target region and then listening for and collecting the echoes of each pulse. SAR uses wavelengths at different frequencies that are not necessarily contiguous, whereas hyperspectral imaging uses narrow, often contiguous optical bands [5] with potentially hundreds or even thousands of components. Multispectral mapping is a condensed form of this technique. The discipline of imaging and visual computing includes the challenge of image classification [6], which entails grouping the pixels of pictures into multiple categories according to their spectral pattern.
Spectral unmixing decomposes the observed spectrum of a hyperspectral pixel into its component signatures and an assortment of matching proportional abundances. Hyperspectral imaging is increasingly used to track the growth and health of crops, despite the expense of capturing hyperspectral pictures often being prohibitive in certain regions. The development of a system for early warning of disease outbreaks and the use of scanning instrumentation to identify specific varieties are both ongoing projects. Image segmentation [7] is frequently used to identify boundaries and objects in pictures; described more exactly, it is the process of giving every pixel in a picture a label so that pixels with an identical label share specific properties.
Encoder-decoder designs are widely used to segment medical and biological pictures: U-Net and V-Net [8] are among the two most often utilized architectures for clinical and biological imaging, and the primary use of U-Net is the segmentation of biomedical images. In the context of these issues, we propose an innovative Modified Extreme Learning Machine (M-ELM) approach for hyperspectral image classification together with LSO-MAPC-based segmentation. The major contributions are:
• The proposed approach enhances hyperspectral image classification accuracy with a lower error rate and thereby significantly reduces complexity.
• The images from the Kaggle dataset are pre-processed using a non-linear diffusion partial differential equation that spatially removes noise and smooths the images.
• The segmentation of the images is carried out with the innovative Light Spectrum Optimization-based Modified Affinity Propagation clustering approach, which improves segmentation accuracy.
• The proposed Modified Extreme Learning Machine (M-ELM) is utilized to enhance classification accuracy with a lower error rate.
The remainder of the work is arranged as follows: section 2 reviews existing work on hyperspectral image classification with its challenges and advantages. In section 3 the proposed work is elucidated briefly. The experimental analysis is explained in section 4. The conclusion is summarized in section 5.

░ 2. LITERATURE SURVEY
Hong et al. [9] have presented graph convolutional networks (GCNs) for the investigation and representation of irregularly structured data. After thoroughly analysing both networks from four distinct angles, the method constructs an enhancement of the current systems to make them more suitable for the hyperspectral image classification task. A unified end-to-end fusion system is produced by finally introducing three distinct fusion algorithms. The suggested method is more adaptable in that it can predict new input data points, i.e., out-of-sample instances, without the network needing to be retrained. However, it falls short of establishing more sophisticated fusion modules.
Hang et al. [10] have described an attention-aided convolutional neural network (CNN) model to categorize hyperspectral pictures spectrally and spatially. To acquire attributes, small cubes are first cut from the hyperspectral picture and then fed into the spectral-spatial model, which can discriminate between different spectral wavelengths and spatial positions within the cubes. If thoroughly examined, this prior knowledge aids in enhancing the network's learning potential, and it efficiently improves the performance of the system. Nevertheless, it is challenging to fully investigate the inherent characteristics of the data.
Zheng et al. [11] have implemented a fast patch-free global learning (FPGA) framework to classify hyperspectral images. To determine the significance of the attribute maps, spectral attention models the mutual dependence of the map elements, which guarantees adequate use of the redundant wavelength data and broad spatial data. With a lower priority on optimization, the identification pattern is gradually recovered using a lightweight decoder, which improves the rapidity and precision of spectral categorization. However, global optimization of the entire system and adequate use of global spatial information are not possible.
Liu et al. [12] have developed a multitask deep learning technique for the open world (MDL4OW) that carries out categorization and reconstruction in the real environment concurrently. The reconstructed data is contrasted with the source values; samples that cannot be reconstituted are categorized as unidentified, because no designation can accurately capture them in the hidden features. To distinguish between the unidentified and identified classes, a significance threshold must be established. The reliability of hyperspectral image categorization with unidentified classes is determined effectively. Still, it will be challenging to recognize the unclear divisions if the categorization method covers all fundamental land-cover components.
Gao et al. [13] have demonstrated a multiscale residual network (MSRN) for the classification of hyperspectral images. In contrast to processes employing multiscale components, those using single-scale ones typically perform multiscale feature mining only once, making it difficult to fully capture the depiction of spectral-spatial features at various scales. The MSRN can carry out multiscale feature extraction at different levels of the network. However, this will also have an impact on the resulting application effect.

░ 3. PROPOSED METHODOLOGY
The proposed methodology is for the classification and detection of hyperspectral images, and our approach is based on Modified Affinity Propagation Clustering (MAPC)-based segmentation and a novel M-ELM for the classification of the segmented hyperspectral images. Before these two phases, the images are spatially pre-processed using non-linear diffusion partial differential equations. The overview of the proposed approach is interpreted in figure 1.

Pre-processing
In the hyperspectral images, noise is removed using the spatial pre-processing approach, which also smooths the images. This approach often enhances the spatial texture information and thereby significantly improves the classifier accuracy [19].
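The paper does not give the discretized form of its non-linear diffusion PDE; a minimal per-band sketch, assuming a Perona-Malik-style scheme with hypothetical parameter values (the function name and defaults are ours, not the authors'), could look like:

```python
import numpy as np

def perona_malik(band, n_iter=20, kappa=0.1, step=0.25):
    """One-band nonlinear (Perona-Malik-style) diffusion: smooths
    noise while preserving edges via gradient-dependent conductivity.
    Uses periodic boundaries (np.roll) for brevity."""
    u = band.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conductivity g = exp(-(|grad|/kappa)^2): small across edges,
        # so strong edges diffuse little while flat noise is smoothed
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

Applying this band by band across the hyperspectral cube yields the spatially smoothed input used by the later stages.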

Modified Affinity Propagation Clustering (MAPC)
The proposed innovative MAPC is used for the segmentation of the images taken. It has previously been observed that segmenting the images improves HSI classification accuracy, whereas generating a large number of segmented regions might push the classifier to predict wrongly. For this reason, we propose a novel MAPC algorithm [20]. MAPC deems each pixel of the image a potential exemplar and creates the sub-segments. The main reason for choosing MAPC is that the number of sub-segments need not be determined beforehand; it is chosen automatically based on the specific problem.
To begin the segmentation of the images, the sub-segmented regions are created from the correlation among the pixels. The similarity index s(i, k) is measured for each pixel k to determine the exact exemplar of pixel i. The Euclidean distance is not only responsible for the correlation of the pixels but is also easily determined in the search space. Hence, we use the negative squared Euclidean distance, and the similarity index s(i, k) can be formulated as

s(i, k) = -||x_i - x_k||^2     (1)

where the locations of pixels i and j are x_i and x_j respectively. After the similarity index of each pixel is computed, the proposed approach utilizes the message-passing technique. The data between two pixels are exchanged in two forms, namely availability and responsibility. The steps involved are elucidated below. At first, the availabilities of the pixels a(i, k) are initialized to 0, followed by the measurement of the responsibilities r(i, k):

r(i, k) = s(i, k) - max_{k' ≠ k} { a(i, k') + s(i, k') }     (2)

The availability a(i, k) for i ≠ k is then updated as

a(i, k) = min{ 0, r(k, k) + Σ_{i' ∉ {i, k}} max(0, r(i', k)) }     (3)

However, the measurement of the self-availability is different from eq. (3) and is outlined as follows [21]:

a(k, k) = Σ_{i' ≠ k} max(0, r(i', k))     (4)

A damping factor λ, which lies in the interval between 0 and 1, is applied to the message updates. Subsequently, a pixel i is considered an exemplar if the index l that maximizes a(i, l) + r(i, l) equals i. The aforementioned procedure is repeated until the measured exemplars remain unchanged for more than two iterations. Thus, the segmentation of the hyperspectral images is formed automatically with the proposed MAPC.
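The message-passing steps described above can be sketched as a generic affinity-propagation loop. This is a minimal illustrative version, not the authors' implementation: the choice of the median similarity as the exemplar preference and the damping value are assumptions of ours.

```python
import numpy as np

def affinity_propagation(X, damping=0.8, n_iter=200):
    """Minimal affinity-propagation sketch over feature vectors X:
    similarity is the negative squared Euclidean distance, and the
    responsibility/availability messages are damped each iteration."""
    n = X.shape[0]
    # s(i, k) = -||x_i - x_k||^2; the diagonal (preference) controls
    # how readily points become exemplars -- median is one common choice
    S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S[np.diag_indices(n)] = np.median(S)
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(n_iter):
        # responsibilities: r(i,k) = s(i,k) - max_{k'!=k}(a(i,k')+s(i,k'))
        AS = A + S
        idx = AS.argmax(1)
        first = AS[np.arange(n), idx].copy()
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # availabilities: a(i,k) = min(0, r(k,k) + sum max(0, r(i',k)))
        # with the separate self-availability rule on the diagonal
        Rp = np.maximum(R, 0)
        Rp[np.diag_indices(n)] = R[np.diag_indices(n)]
        Anew = Rp.sum(0)[None, :] - Rp
        dA = np.diag(Anew).copy()
        Anew = np.minimum(Anew, 0)
        Anew[np.diag_indices(n)] = dA
        A = damping * A + (1 - damping) * Anew
    # each point's exemplar maximizes a(i,k) + r(i,k)
    return np.argmax(A + R, axis=1)
```

For image segmentation the rows of X would be per-pixel feature vectors (e.g. spectral values, optionally with spatial coordinates), and pixels sharing an exemplar form one sub-segment.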

Light Spectrum Optimizer (LSO)
The Light Spectrum Optimizer (LSO) is inspired by a meteorological phenomenon, the rainbow. The LSO rests on the following assumptions: (i) every colorful ray represents a candidate solution; (ii) the light ray's dispersion ranges from 40° to 42°; (iii) the light ray in the population that has reached the best dispersion so far is the global best solution; (iv) reflection and refraction are controlled randomly; and (v) in the initial phases, the rainbow color curves are managed via the fitness of the current solutions compared with the best-so-far fitness. The mathematical representation of the LSO is outlined below.

Initialization
The LSO search process starts by initializing white random light rays [22]. A random vector r_1, uniformly distributed in the range [0, 1], generates the initial solution X_0:

X_0 = L + r_1 (U - L)

where d is the dimension of the problem and U and L are its upper and lower bounds.
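The initialization above can be sketched directly; the function name and seed handling are ours, but the formula is the uniform bound-constrained initialization the text describes.

```python
import numpy as np

def initialize_population(M, d, lower, upper, seed=0):
    """Uniform random initialization of M candidate rays inside
    [lower, upper]^d: X0 = L + r1 * (U - L), with r1 ~ U[0, 1]."""
    rng = np.random.default_rng(seed)
    r1 = rng.random((M, d))            # uniform random vectors in [0, 1]
    return lower + r1 * (upper - lower)
```

Each row of the returned array is one "white light ray" that the subsequent dispersion, exploration, and exploitation steps then refine.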

Light rays based on the dispersion of various colors
Exploration, exploitation, the scattering of colourful rays, and the direction of the rainbow spectrum form the mechanisms of the LSO, as follows.

Rainbow spectrum direction
The formulas below give the normal vectors of the outer refraction, inner reflection, and inner refraction. The mean of the current population is computed over the solutions X_j (j = 1, ..., M), where M is the population size and X_0 is the incident light ray.

Exploration scheme (Create new colorful ray)
A probability r for the solutions is created randomly in the range [0, 1], and equation (13) computes the new candidate solutions. The new candidate solutions are created around the current solution using uniform random vectors r_2 and a scaling factor.

Exploitation scheme (Scattering of colorful rays)
The exploitation operator refines the current population around the chosen solution, moving the current solution and the best-so-far solution toward the new position.

LSO-MAPC-based Segmentation of hyperspectral images
The proposed LSO-MAPC utilizes an exclusion scheme, and convergence predictions are made for the optimal allocation of computing resources for the segmented images. The former is used to avert the overlapping issues that occur while performing the clustering process, and the latter helps to stop updating segmented images that have converged.

Proposed Modified Extreme Learning Machine (M-ELM) based Hyperspectral image classification
The hyperspectral image classification is effectuated with our proposed robust approach, known as M-ELM [24]. The main advantage of M-ELM is that it avoids the initialization procedures followed in the input and hidden layers.
The weight vector connecting the input to the t-th hidden neuron is w_t, and the weights linking the hidden neurons to the output are taken as β_t. The hidden-layer output is computed with respect to the input y and the bias b_t of the t-th hidden neuron. Meanwhile, the number of hidden neurons is much lower than the number of training instances, which makes the hidden-layer output matrix S non-square [23,24]. Under this condition the inverse S^{-1} is unavailable; however, it is ineluctable to compute a value for the measurement of the output neurons' weights.
For our proposed approach, the required value is measured using S*, the Moore-Penrose generalized inverse of the matrix S. Subsequently, a positive user-defined value C is appended to the ELM as a regularization term while measuring the output weights β:

β = S^T (I/C + S S^T)^{-1} T

Here C is the user-defined value and T is the target matrix, and the feature mapping in the hidden layer is expressed through the kernel function of the ELM. The kernel matrix of the ELM is implied as Ω and is formulated as

Ω_{u,v} = h(y_u) · h(y_v) = K(y_u, y_v)

where the kernel function is K(y_u, y_v), and the corresponding output for a new sample y is

f(y) = [K(y, y_1), ..., K(y, y_N)] (I/C + Ω)^{-1} T

For the proposed M-ELM, the kernel functions used here are the tangent, Gaussian, and wavelet functions, together with the soft-plus parameter of the ELM. Our proposed approach improves hyperspectral classification accuracy while mitigating the error rate; accuracy is also enhanced by reducing the complexity of the procedures followed.
The proposed M-ELM is used to train and test the images selected from the dataset. The proposed approach handles the classification pixel-wise and also determines a pixel-wise classification map. Moreover, the proposed approach detects superpixels based on the segmentation performed by the proposed innovative LSO-MAPC approach. This ensures the classification of hyperspectral images by category and attains a higher classification accuracy.

░ 4. RESULT AND DISCUSSION
The experimental outcomes related to the proposed hyperspectral image segmentation and classification are investigated on the Python implementation platform. The effectiveness of the proposed hyperspectral image categorization method was measured using statistical parameters. The population size of the LSO is 100, with a fixed maximum number of iterations.

Dataset Description
Dataset images for this study are collected from the Kaggle dataset (https://www.kaggle.com/c/ipsa-ma511/data). The training data contains 103 spectral bands with 310 x 340 pixels. The file train_y.npy holds the respective image labels. Figure 3 shows the hyperspectral classification results, in which the classified objects are differentiated using various colors.
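Before pixel-wise classification, a hyperspectral cube of this shape is typically flattened into a pixel matrix; a small sketch (the function name and the assumed (H, W, B) layout are ours — only train_y.npy is named in the text):

```python
import numpy as np

def load_pixels(cube, labels):
    """Flatten an (H, W, B) hyperspectral cube into an (H*W, B) pixel
    matrix with matching per-pixel labels, e.g. for the 310 x 340 x 103
    cube described in the dataset section."""
    H, W, B = cube.shape
    return cube.reshape(H * W, B), labels.reshape(H * W)
```

With the actual files, `cube` would come from the dataset's image array and `labels` from `np.load("train_y.npy")`.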

Performance Investigation
The state-of-the-art comparison of SSIM is illustrated in Figure 4. State-of-the-art approaches such as GCN [9], CNN [10], FPGA [11], and MSRN [13] are compared with the proposed method in terms of hyperspectral image segmentation SSIM. The proposed framework attained an SSIM of 95%, outperforming GCN [9] (88%), CNN [10] (90%), FPGA [11] (88%), and MSRN [13] (84%). This plot demonstrates that the proposed work surpassed the existing techniques in terms of SSIM.
The comparative study for the number of iterations vs. accuracy is illustrated in figure 5. Figure 6 shows the comparison analysis for the number of iterations vs. specificity. The comparison study for the number of iterations vs. sensitivity is shown in figure 7.
Table 1 illustrates the computational time comparison, covering existing methods such as GCN [9], CNN [10], FPGA [11], and MSRN [13] alongside the proposed approach. In comparison to GCN [9], CNN [10], FPGA [11], and MSRN [13], the proposed methodology required less computing time to complete the task, at 2.12 s.
Figure 8 plots the comparative study for the number of iterations vs. MCC, showing the MCC of hyperspectral categorization for GCN [9], CNN [10], FPGA [11], MSRN [13], and the proposed method. Across the iterations, the existing methods reached MCC levels of 93%, 94.63%, and 94.78%, while the proposed method attained 95.56%. The proposed system thus overtook the previous GCN [9], CNN [10], FPGA [11], and MSRN [13] techniques in terms of MCC.
The robustness of Light Spectrum-based Affinity Propagation Clustering-based Segmentation is a multifaceted aspect that depends on the interplay of data characteristics, parameter choices, and the specific demands of the application. Careful consideration of these factors, along with thorough validation and testing, is essential to assess and improve the robustness of the segmentation approach in practice.

Figure 2: The framework of M-ELM used for the classification of hyperspectral images

Figure 3: Hyperspectral classification

Evaluation Measures
Various evaluation measures are taken to check the efficiency of the proposed framework using statistical parameters such as the structural similarity index (SSIM), accuracy (A), sensitivity (Sen), specificity (Spec), and the Matthews Correlation Coefficient (MCC), covering both segmentation and classification. The hyperspectral image segmentation is measured with the Structural Similarity Index (SSIM), which measures the similarity between two images; it ranges from -1 to 1, where 1 indicates a perfect match.
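The SSIM measure can be sketched in its simplest global-statistics form (library implementations typically use local sliding windows; the constants C1 and C2 follow the common 0.01/0.03 convention, which is an assumption here):

```python
import numpy as np

def ssim(x, y, L=1.0):
    """Global SSIM between two images with dynamic range [0, L]:
    compares luminance (means), contrast (variances), and structure
    (covariance); 1 indicates a perfect match."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

An identical pair of images scores 1.0, and the score falls as the structure of the two images diverges.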

Figure 5: Comparative study for the number of iterations vs. accuracy
Figure 6: Comparative study for the number of iterations vs. specificity

Figure 8: Comparative study for the number of iterations vs. MCC

░ 5. CONCLUSION
This paper presented a novel modified extreme learning machine-based hyperspectral image classification. Dataset images are gathered from publicly available Kaggle datasets, and the method is implemented in Python. SSIM, accuracy, sensitivity, specificity, and MCC are used as the evaluation parameters in experiments to statistically analyze the effectiveness of the proposed approach and compare it with existing approaches. The proposed work attained 95% SSIM, 96% accuracy, 95.67% specificity, 95.56% sensitivity, and 95.56% MCC in comparison with previous techniques such as GCN, CNN, FPGA, and MSRN. In terms of computational time, this work requires a minimum of 2.12 s.