Abstract: The automated interpretation of satellite imagery is a significant challenge in remote sensing and computer vision. This paper presents a deep learning-based approach for translating raw satellite images into simplified, map-style representations and analyzing land cover types. Using a conditional Generative Adversarial Network (Pix2Pix), the model learns the mapping between paired satellite and map images, producing visually coherent outputs that preserve key geographical structures. A post-processing module then classifies land cover into land, water, and vegetation categories. The system is deployed with a user-friendly Streamlit interface, enabling real-time image processing and visualization. The results demonstrate high visual fidelity and practical usability, indicating strong potential for applications in urban planning, environmental analysis, and geospatial intelligence.
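The post-processing step described above could be realized in several ways; one simple sketch, under the assumption that the generated map-style output uses conventional map colors (blue water, green vegetation, beige land), is per-pixel classification by color dominance. The function name and thresholds below are illustrative, not the paper's actual implementation.

```python
import numpy as np

def classify_land_cover(img: np.ndarray) -> np.ndarray:
    """Label each pixel of an H x W x 3 RGB map image (values 0-255).

    Returns an H x W array: 0 = land, 1 = water, 2 = vegetation.
    The blue-/green-dominance rules are illustrative assumptions.
    """
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    labels = np.zeros(img.shape[:2], dtype=np.uint8)  # default: land
    labels[(b > r) & (b > g)] = 1                     # blue-dominant -> water
    labels[(g > r) & (g > b)] = 2                     # green-dominant -> vegetation
    return labels

# Tiny 1 x 3 demo image: beige, blue, green pixels
demo = np.array([[[210, 200, 180],
                  [60, 110, 200],
                  [70, 160, 80]]], dtype=np.uint8)
print(classify_land_cover(demo))  # [[0 1 2]]
```

A real pipeline would likely smooth the label map (e.g. with a morphological filter) to suppress pixel-level noise before computing per-class area statistics.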
DOI: 10.17148/IARJSET.2025.125343