Remote sensing (RS) has become indispensable for Earth observation, environmental monitoring, and geospatial analysis. However, satellite imaging systems often produce low-resolution (LR) imagery due to inherent hardware limitations, data transmission constraints, and fundamental trade-offs between spatial, spectral, and temporal resolution. In this context, Single Image Super-Resolution (SISR) techniques offer a promising computational alternative by reconstructing high-resolution (HR) images from single LR observations, enhancing spatial detail without requiring sensor upgrades. This thesis presents a progressive, domain-specific investigation into SISR techniques tailored for RS images. The research begins with a classical edge-aware reconstruction framework that combines probabilistic graphical modeling with phase-based feature extraction. Specifically, a modified Markov Random Field (MRF) model is introduced, guided by 2D Phase Congruency maps to enhance edge representation during the patch similarity search. To ensure perceptual fidelity, a texture prior is incorporated into the joint compatibility function, and a new metric, the Image Euclidean Distance (Ieuc), is proposed for improved matching in feature space. The thesis then proposes a sparse representation-based SISR framework using a hybrid overcomplete dictionary trained on self-learned features. Addressing the limitations of traditional gradient-only approaches, the method integrates FFT-based frequency components with first- and second-order spatial gradients to better preserve complex edge profiles such as ramp,
delta, and roof edges. The joint LR-HR dictionary is trained using the Orthogonal Matching Pursuit (OMP) and K-SVD algorithms. During inference, LR patches are sparsely encoded over the learned dictionary atoms. Comparative results highlight the effectiveness of hybrid feature modeling for remote sensing SISR. To overcome the limited ability of traditional models to capture multi-scale spatial patterns, the research advances into deep generative modeling with
the development of Res2Net-SRGAN, a GAN-based architecture that incorporates Res2Net blocks into the generator. These blocks facilitate hierarchical multi-scale feature extraction within residual units, enabling better reconstruction of the fine textures and structural details common in RS imagery. The model is optimized with a composite loss function combining adversarial, content, perceptual, and total variation losses. Extensive experiments on the UC Merced dataset show that Res2Net-SRGAN outperforms bicubic interpolation, SRGAN, ESRGAN, and EDSR in PSNR, SSIM, and perceptual quality (LPIPS), producing sharper textures and more accurate edge reconstructions. Finally, the thesis explores Reference-based Super-Resolution (RefSR), in which an external high-resolution reference image guides the super-resolution process. To address challenges such as reference misalignment and poor edge transfer, a novel RefSR framework is
proposed, comprising two enhancement streams: a Texture Enhancement (TE) module powered by a trainable autoencoder, and an Edge Enhancement (EE) module guided by domain-specific priors. Central to this design is the Deep Feature Attention Module (DFAM), a newly proposed attention mechanism that selectively transfers semantically relevant features while suppressing noise and misalignment artifacts. Unlike static VGG-19 feature encoders, the autoencoder-based approach adapts to remote sensing-specific content. In summary, this thesis charts a comprehensive trajectory of techniques, from probabilistic and sparse modeling to deep generative and reference-guided learning, specifically adapted to the challenges of remote sensing SISR. The proposed methods address key issues including edge fidelity, texture realism, and scale-aware feature learning, making a significant contribution to the advancement of super-resolution for Earth observation imagery.
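To make the sparse-coding stage concrete, the greedy OMP encoding used to express an LR patch over learned dictionary atoms can be sketched as follows. This is a minimal NumPy illustration of the generic OMP algorithm, not the thesis implementation; the function name `omp` and the toy dictionary in the usage example are assumptions for demonstration only.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: approximate signal y with at most k
    atoms of dictionary D (columns assumed unit-norm).

    D : (n, m) dictionary matrix
    y : (n,) signal (e.g., a vectorized LR patch feature)
    k : sparsity level (maximum number of atoms to select)
    Returns an (m,) sparse coefficient vector x with D @ x ~= y.
    """
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    coeffs = np.zeros(0)
    for _ in range(k):
        # Greedy step: pick the atom most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Projection step: least-squares fit of y on the selected atoms.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x
```

In a dictionary-learning pipeline, this encoding step alternates with a dictionary-update step (K-SVD) during training; at inference, only the encoding over the fixed joint LR-HR dictionary is required.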