
Comparing changes

Choose two branches to see what’s changed or to start a new pull request. Learn more about diff comparisons.
base repository: preshanth/SAM-RFI
base: main
head repository: preshanth/SAM-RFI
compare: sam2_dinov2_unet
  • 2 commits
  • 26 files changed
  • 1 contributor

Commits on Oct 6, 2025

  1. Add LoRA support for parameter-efficient fine-tuning

     Changes:
     - Add peft>=0.7.0 to dependencies (pyproject.toml)
     - Add LoRA configuration options to training_config.yaml
     - Implement LoRA wrapper in sam2_trainer.py (lines 203-223)
     - Pass LoRA params through run_training.py
     - Create 10K dataset generation config (synthetic_train_10k.yaml)
     - Create LoRA training config (training_lora_10k.yaml)
     - Fix np.load memory leak in BatchedDataset (sam_dataset.py)
     - Reduce cache multiplication (num_workers: 4→2, cache_size: 4→2)

     LoRA targets q_proj/v_proj in attention layers (rank=16, alpha=32).
     Enables training on the vision encoder while keeping base weights frozen.

     Usage:
       python scripts/run_training.py --config configs/training_lora_10k.yaml
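The low-rank update that the commit describes (rank=16, alpha=32, applied to q_proj/v_proj) can be sketched numerically. This is an illustrative NumPy toy showing the LoRA math, not the actual wrapper in sam2_trainer.py; all shapes and names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 64, 16, 32                  # rank and alpha as in the commit message

W = rng.standard_normal((d, d))           # frozen base projection (e.g. q_proj)
A = rng.standard_normal((r, d)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero-initialized

def lora_forward(x):
    # y = x W^T + (alpha / r) * x A^T B^T ; only A and B receive gradients
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((2, d))
# With B zero-initialized, the LoRA branch contributes nothing at step 0,
# so the wrapped layer starts out identical to the frozen base layer:
assert np.allclose(lora_forward(x), x @ W.T)
```

The trainable parameter count is 2·r·d per wrapped projection (here 2048) versus d² (4096) for the frozen base weight, which is the "parameter-efficient" part of the commit title.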
    preshanth committed Oct 6, 2025
    6fab7be
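The "np.load memory leak" bullet above likely refers to a common pattern: np.load on a .npz archive returns an NpzFile that holds an open file handle until it is closed, so loading one per dataset item without closing it accumulates handles and buffers. A hedged sketch of the fix, with hypothetical paths and names (the real BatchedDataset code is not shown in this diff):

```python
import os
import tempfile
import numpy as np

# Hypothetical batch file standing in for a BatchedDataset shard.
path = os.path.join(tempfile.mkdtemp(), "batch.npz")
np.savez(path, data=np.arange(8, dtype=np.float32))

def load_batch(p):
    # NpzFile is a context manager; copying the array out inside the
    # `with` block lets the file handle close immediately on exit,
    # instead of lingering until garbage collection.
    with np.load(p) as npz:
        return npz["data"].copy()

batch = load_batch(path)
```

Copying before the block exits matters because the lazily-loaded array would otherwise keep a reference to the closed archive.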
  2. Updating to move to the SAM2-NeXT type of framework to combine dinov2 and SAM2 for fine RFI detection
    preshanth committed Oct 6, 2025
    7047e67