N-Raghav/Deep-Homography-Estimation

Deep Homography Estimation

Performing panorama stitching of images using homographies estimated with classical and deep learning methods.

References

  1. Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich, "Deep Image Homography Estimation" (2016).
    Base architecture for the supervised HomographyNet.

  2. Ty Nguyen, Steven W. Chen, Shreyas S. Shivakumar, Camillo J. Taylor, Vijay Kumar, "Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model" (2017).
    Basis for the unsupervised loss function (Tensor DLT and Spatial Transformer).
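As a point of reference, the supervised network in the DeTone et al. paper is a VGG-style CNN that takes two stacked 128x128 grayscale patches and regresses the eight coordinates of a 4-point corner displacement. The sketch below follows the layer sizes given in the paper; it is an illustration of that architecture, not the code in Phase2/Code/Network/.

```python
# Illustrative sketch of the supervised HomographyNet from DeTone et al. (2016).
# Layer widths follow the paper; this is not the repository's Network code.
import torch
import torch.nn as nn

class HomographyNet(nn.Module):
    def __init__(self):
        super().__init__()
        layers, in_ch = [], 2  # input: two stacked grayscale patches
        for i, out_ch in enumerate([64, 64, 64, 64, 128, 128, 128, 128]):
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
            if i % 2 == 1 and i < 7:  # 2x2 max-pool after each conv pair except the last
                layers.append(nn.MaxPool2d(2))
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),
            nn.Linear(128 * 16 * 16, 1024), nn.ReLU(inplace=True),
            nn.Dropout(0.5), nn.Linear(1024, 8))  # eight 4-point corner offsets

    def forward(self, x):  # x: (N, 2, 128, 128)
        return self.regressor(self.features(x))
```

The supervised variant is trained with an L2 loss between predicted and ground-truth corner offsets; the unsupervised variant of Nguyen et al. instead converts the offsets to a homography (Tensor DLT), warps one patch with a spatial transformer, and minimizes photometric error.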

Phase 1: Traditional Approach

Phase 1 of the project is structured as follows:

  • Phase1/Code/: Contains the implementation code for image stitching and blending.
    • Phase1/Code/Wrapper.py: Main script to run the panorama stitching pipeline.
    • Phase1/Code/Corners.py: Implements Shi-Tomasi corner detection.
    • Phase1/Code/ANMS.py: Implements Adaptive Non-Maximal Suppression for corner selection.
    • Phase1/Code/FeatureDescriptor.py: Implements feature descriptor generation.
    • Phase1/Code/Blending.py: Implements multi-band blending for panorama generation.
    • Phase1/Code/FeatureMatchingAndRansac.py: Handles feature matching and RANSAC outlier rejection between images.
  • Phase1/Data/: Contains training and testing datasets.
    • Phase1/Data/Train/: Training datasets for development and testing.
    • Phase1/Data/Test/: Testing datasets for evaluation.
  • Phase1/Output/: Directory where output panoramas will be saved.

Phase 2: Deep Learning Approach

Phase 2 implements a deep learning-based approach to image stitching using convolutional neural networks. This phase includes both supervised and unsupervised training methods.

  • Phase2/Code/: Contains the deep learning implementation code.
    • Phase2/Code/Wrapper.py: Main script for supervised learning pipeline.
    • Phase2/Code/Train.py: Training script for the learning models.
    • Phase2/Code/Train_supervised.py: Training script for supervised learning model.
    • Phase2/Code/Train_unsupervised.py: Training script for unsupervised learning model.
    • Phase2/Code/Test.py: Testing script for the learning models.
    • Phase2/Code/Test_supervised.py: Testing script for supervised learning model.
    • Phase2/Code/Test_unsupervised.py: Testing script for unsupervised learning model.
    • Phase2/Code/Result_Generator.py: Generates results and metrics from predictions.
    • Phase2/Code/DataGenerator.py: Generates synthetic training data with random patches and homography perturbations.
    • Phase2/Code/Blending.py: Implements image blending for stitched panoramas.
    • Phase2/Code/Network/: Contains neural network architecture definitions.
    • Phase2/Code/config/: Configuration files for training and testing.
    • Phase2/Code/GenerateGrid.py: Generates a grid visualization of panoramas.

Dependencies

  • Python 3.x
  • NumPy
  • OpenCV
  • Matplotlib
  • PyTorch
  • TensorBoard
  • Scikit-image

Instructions to Run

Phase 1: Traditional Approach

Run the following command from the root directory of the project to execute the panorama stitching pipeline:

python Phase1/Code/Wrapper.py

The code automatically reads images from the Phase1/Data/Train/ and Phase1/Data/Test/ folders, processes them, and saves the output images at each step and the final panoramas in the Phase1/Output/ directory.

Each image is saved as Phase1/Output/StepName/SetName/ImageName.jpg, where StepName is the processing step (e.g., Corners, ANMS), SetName is the dataset folder name (e.g., Set1, Set2), and ImageName is the image file name.
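For example, the output path for the ANMS step of Set1 can be built as follows (the image name 1.jpg is a placeholder; step and set names follow the convention described above):

```python
from pathlib import Path

# Output location for the ANMS step of dataset Set1; "1.jpg" is a placeholder name.
out_path = Path("Phase1/Output") / "ANMS" / "Set1" / "1.jpg"
# On POSIX systems: Phase1/Output/ANMS/Set1/1.jpg
```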

Phase 2: Deep Learning Approach

Training a Model

Run the following command to train the model:

python Phase2/Code/Train.py

Testing a Model

Run the following command to test the trained model:

python Phase2/Code/Test.py

Running Inference

To run inference with a trained model:

python Phase2/Code/Inference.py

Generating Results

To generate comprehensive results and metrics:

python Phase2/Code/Result_Generator.py

Command Line Arguments

Phase 2: Train.py

  • --BasePath: Base path of images (default: Phase2/Data)
  • --CheckPointPath: Path to save Checkpoints (default: Phase2/Checkpoints/)
  • --ModelType: Model type - choose from Sup or Unsup (default: Unsup)
  • --NumEpochs: Number of epochs to train for (default: 200)
  • --DivTrain: Factor to reduce training data by per epoch (default: 1)
  • --MiniBatchSize: Size of the mini-batch to use (default: 32)
  • --LoadCheckPoint: Load model from latest checkpoint? 0 or 1 (default: 0)
  • --LogsPath: Path to save logs for TensorBoard (default: Phase2/Logs/Supervised/)
  • --RunName: Name of the run appended to LogsPath for TensorBoard (default: Normalized_200)
  • --CheckpointFreq: Frequency in epochs to save checkpoints (default: 50)
  • --ModelVersion: Model version to use - choose from Original or Improved (default: Original)

Example usage:

python Phase2/Code/Train.py --BasePath Phase2/Data --ModelType Sup --NumEpochs 100 --MiniBatchSize 64

Phase 2: Test.py

  • --ModelPath: Path to load trained model from (default: /home/chahatdeep/Downloads/Checkpoints/144model.ckpt)
  • --BasePath: Path to load images from (default: /home/chahatdeep/Downloads/aa/CMSC733HW0/CIFAR10/Test/)
  • --LabelsPath: Path of labels file (default: ./TxtFiles/LabelsTest.txt)
  • --ModelVersion: Model version to test - choose from Original or Improved (default: Original)
  • --ModelType: Model type - choose from Sup or Unsup (default: Sup)
  • --SavePath: Path to save output images (default: Phase2/Results)

Example usage:

python Phase2/Code/Test.py --ModelPath Phase2/Checkpoints/final_model.ckpt --BasePath Phase2/Data/Test --ModelType Sup

Phase 2: Inference.py

  • --ModelPath: Path to trained model (required)
  • --DataPath: Path to image folder (required)
  • --SavePath: Where to save the panorama (default: Phase2/Results/)

Example usage:

python Phase2/Code/Inference.py --ModelPath Phase2/Checkpoints/final_model.ckpt --DataPath Phase2/Data/Phase2Pano/unity_hall

Phase 2: Wrapper.py

  • --ModelPath: Path to load trained model from (default: Phase2/final_models/Supervised_old_200.ckpt)
  • --ModelType: Model type - choose from Supervised or Unsupervised (default: Supervised)
  • --ModelVersion: Model version - choose from Original or Improved (default: Original)
  • --DataPath: Path to dataset folder (default: Phase2/Data/Phase2Pano/unity_hall)

Example usage:

python Phase2/Code/Wrapper.py --ModelPath Phase2/Checkpoints/final_model.ckpt --ModelType Supervised --DataPath Phase2/Data/Phase2Pano/unity_hall
